Practical Course: Database Implementation
Content
Short version: In this practical course you will implement a main-memory database system from scratch.
Relational database systems are ubiquitous. Despite being designed decades ago, the fundamental architecture of these general-purpose systems has remained virtually unchanged. In particular, these systems were optimized to minimize the number of disk I/O operations. More recently, growing RAM capacities and fundamental changes to the hardware landscape, such as the proliferation of multi-core CPUs, have changed the underlying performance tradeoffs. The standard query evaluation model (the "iterator model", based on interpretation), for example, incurs very large overheads when data mostly resides in RAM. Modern column stores, which enable very fast query processing but are not suitable for online transaction processing, have mostly converged on a common design; this is not the case for hybrid main-memory OLAP/OLTP systems. In this project we therefore focus on such hybrid systems, where the architecture still differs considerably between proposals.
As a consequence of these trends, core database systems research on how to optimize database systems for modern hardware has proliferated. To design a high-performance database system for modern hardware, two fundamental high-level approaches are possible. Some researchers start with traditional systems, try to find the bottlenecks, and optimize them one by one. The advantage of this approach is that one has a fully-featured system as a basis (e.g., the open-source PostgreSQL system or the storage manager Shore). The disadvantage is that one is still constrained by the old architecture and risks getting stuck in a "local minimum".
In this course we take the second, opposite approach: we start with a very fast, but limited, kernel of database functionality and gradually extend it to make it more "database-like". Specifically, we start by hand-coding the core parts of TPC-C, a common transaction processing (OLTP) benchmark, in C++. This enables very high performance from the start and allows us to focus on fundamental design decisions such as how to store and index data most efficiently. It also allows for easy experimentation with the tradeoffs between row and column stores. We then focus on efficient analytical query processing. To this end, we again manually implement queries based on the previously designed physical database storage format, again without being constrained by architectural design decisions.
In the second phase, we use compilation as our main technique to automatically generate the code that was previously hand-written. Instead of using the iterator model, this approach compiles queries directly to fast code that consists of tight loops and operates directly on the data representation. We use compilation to generate both the database storage data structures with their access methods and the code for individual queries. To compile complex, analytical multi-operator queries, which are generally represented as trees of algebra operators, we implement a generic query compiler that translates such a tree into machine code. For fast prototyping, we use C++ as the intermediate language, which is compiled at runtime by calling a C++ compiler.
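To make the target of this compilation concrete, here is a minimal hand-written sketch of the kind of code such a compiler might emit for a simple aggregation query; the column-store layout and all names are illustrative assumptions, not part of the course material:

```cpp
// Illustration of possible compiler output for a query like
// "select sum(o_ol_cnt) from "order" where o_d_id = 3", assuming the
// order table is stored column-wise in std::vector columns (names invented).
#include <cstddef>
#include <cstdint>
#include <vector>

int64_t query(const std::vector<int32_t>& o_d_id,
              const std::vector<int32_t>& o_ol_cnt) {
   int64_t sum = 0;
   for (std::size_t i = 0, n = o_d_id.size(); i != n; ++i)
      if (o_d_id[i] == 3)     // selection fused into the scan loop
         sum += o_ol_cnt[i];  // aggregation: no virtual calls, no interpretation
   return sum;
}
```

In contrast to the iterator model, the generated loop touches the data directly; all operator boundaries disappear at compile time.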
Prerequisites
- lecture Fundamentals of Databases (IN0008) or similar course
- systems programming experience in C/C++
- beneficial: lecture Database Systems on Modern CPU Architectures (IN2118)
Organization
- Our weekly meetings will be every Thursday at 3:30pm in room 02.11.058. The first meeting is on 20 October.
- The course will be in English.
General Notes
Please note the following suggestions for your project: Notes on Programming Assignments. Here is a summary:
- Use a build system, preferably make or cmake. Make is fairly easy to get started with; you can find a simple tutorial e.g. here.
- Use a source code control system, e.g. Git. You can find an interactive tutorial e.g. here.
- Use a memory checker to find (weird) bugs in your program; we recommend Valgrind.
- If you prefer to use a (graphical) IDE, we can recommend Eclipse with CDT or KDevelop.
- Use a debugger, e.g. gdb. Many IDEs have debugging features built in, e.g. Eclipse.
- A nice tool for checking the generated assembly instructions is this interactive compiler.
- If you use GCC (g++) as a compiler, here are some important compiler flags/options that may help you to find bugs:
- -g generate debug symbols for better debugging
- -O0 disable optimizations to allow for more reliable debugging, use -O3 instead when you measure performance
- -Wall generate helpful warnings
- -std=c++11 to enable C++11
- profiling with perf:
- profiling helpers
- record where time is spent: perf record -e cycles:pp -p PID (PID is the process id of the process you want to benchmark)
- show report of recorded events: perf report
- sample call graphs, not just locations: perf record -e cycles:pp -p `pidof BINARYNAME` --call-graph=dwarf (perf report will then show cumulative results)
- flame graphs
- count number of cache misses: perf stat -e cache-misses -p PID
- list of available events: perf list
- monitor CPU-specific perf events (not listed in perf list)
- find event for your CPU in Intel Software Developer's Manual Volume 3 Chapter 19
- you need the "Event Num." (e.g. D2) and "Umask Value" (e.g. 01)
- the perf event name starts with r, then concatenate umask and event (umask first!): perf record -e r01D2
- perf tutorial
Task 1
Implement the TPC-C "New Order" transaction and storage for all relations as a standalone C++ program.
- Write a load procedure to populate the database with the data provided in the *.tbl files.
- Translate the "New Order" (neworder.script) transaction to C++ using the following mapping from SQL to C data types (a sketch of this mapping as a C++ struct follows this list):
- integer: int32_t
- char, varchar: length plus character array (do not use pointers or std::string)
- timestamp: uint64_t (seconds since 01.01.1970)
- numeric(x,y): int64_t (x is the total number of digits, y is the number of digits after the decimal separator; for example, with numeric(10,2) the value 12.3 is represented as 1230)
- Test your implementation by translating newOrderRandom (from oltp.script) to C++ and executing it 1 million times; how many newOrder transactions per second can your implementation execute?
- Before and after executing the transactions, print the number of tuples in the order, neworder, and orderline tables.
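As a concrete illustration of the type mapping above, here is a minimal sketch of a fixed-size tuple struct for a subset of the TPC-C warehouse relation; the Varchar helper and the selection of attributes are assumptions for illustration, not a prescribed design:

```cpp
// Minimal sketch of the SQL-to-C type mapping (subset of warehouse attributes).
#include <cstdint>
#include <cstring>

template <unsigned MaxLen>
struct Varchar {  // length plus character array, no pointers/std::string
   uint32_t len = 0;
   char data[MaxLen];
   void set(const char* s, uint32_t l) { len = l; std::memcpy(data, s, l); }
};

struct Warehouse {
   int32_t w_id;        // integer
   Varchar<10> w_name;  // varchar(10)
   char w_state[2];     // char(2)
   int64_t w_tax;       // numeric(4,4): e.g. 0.1000 stored as 1000
   int64_t w_ytd;       // numeric(12,2): e.g. 300000.00 stored as 30000000
};
```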
Task 2
- Write a simple parser for SQL schema files. It should handle the TPC-C schema file. You can assume that all attributes are declared NOT NULL, and you must support additional non-primary (non-unique) key indexes. This code may be helpful, but it does not support all needed features.
- Write a generator that generates C++ code from the schema. It must be possible to store tuples, insert and delete them, access their attributes, and find them using the primary key (one possible design is sketched after this list).
- (Manually) re-implement the newOrder transaction using the generated code.
- Add the delivery transaction to your existing TPC-C implementation.
- Execute 1 million transactions composed of a mix of newOrder (90%) and delivery (10%) transactions (see oltp.cpp); measure your performance. Send your solution to Viktor Leis by November 3, 2016, 2pm.
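The design of the generated code is up to you. As one possible direction, here is a minimal sketch of a column-wise table with an STL-based primary key index; the class name and attribute subset are invented for illustration:

```cpp
// Minimal sketch of generator output for a relation with primary key (w_id):
// one std::vector per attribute plus an unordered_map primary key index.
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct WarehouseTable {
   std::vector<int32_t> w_id;
   std::vector<int64_t> w_ytd;
   std::unordered_map<int32_t, std::size_t> pk;  // w_id -> row slot

   void insert(int32_t id, int64_t ytd) {
      pk.emplace(id, w_id.size());
      w_id.push_back(id);
      w_ytd.push_back(ytd);
   }

   std::size_t find(int32_t id) const { return pk.at(id); }  // primary key lookup

   void remove(int32_t id) {  // swap-with-last deletion keeps the columns dense
      std::size_t slot = pk.at(id);
      pk.erase(id);
      if (slot + 1 != w_id.size()) {
         w_id[slot] = w_id.back();
         w_ytd[slot] = w_ytd.back();
         pk[w_id[slot]] = slot;  // fix the index entry of the moved tuple
      }
      w_id.pop_back();
      w_ytd.pop_back();
   }
};
```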
Task 3
- Implement the following SQL query as a C++ program using STL data structures for the join implementation. Do not use any existing index structures (e.g., the primary key indexes):
select sum(ol_quantity*ol_amount-c_balance*o_ol_cnt) from customer, "order", orderline where o_w_id = c_w_id and o_d_id = c_d_id and o_c_id = c_id and o_w_id = ol_w_id and o_d_id = ol_d_id and o_id = ol_o_id and c_last like 'B%'
- Run the query 10 times in sequence. For each run, print the result of the query (should be 1367872001.25 initially) and measure its execution time.
- Execute 1 million transactions (the newOrder and delivery mix). Using the fork system call, run the query concurrently. Whenever a query finishes, create a new snapshot using fork, so that exactly one snapshot and query is active at any given time. This example for using fork may be helpful; a minimal sketch also follows this list.
- Measure how many transactions per second you can process using this snapshotting model, how long the queries take on average, and how long the fork takes.
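One way to structure the snapshotting loop is sketched below; the placeholder functions stand in for your own transaction and query code and are assumptions, not given by the task:

```cpp
// Minimal sketch of fork-based snapshotting: the parent processes transactions,
// a single child at a time runs the query on a copy-on-write snapshot.
#include <sys/wait.h>
#include <unistd.h>

// Hypothetical stand-ins for your own code:
static int remaining = 1000000;
static bool moreTransactions() { return remaining > 0; }
static void processOneTransaction() { --remaining; }  // newOrder/delivery mix
static void runQuery() { /* run the sum query on the snapshot, print result */ }

int main() {
   pid_t child = fork();  // child sees a consistent snapshot of the database
   if (child == 0) { runQuery(); return 0; }
   while (moreTransactions()) {
      processOneTransaction();
      int status;
      if (waitpid(child, &status, WNOHANG) == child) {  // query finished?
         child = fork();  // take a fresh snapshot for the next query
         if (child == 0) { runQuery(); return 0; }
      }
   }
   waitpid(child, nullptr, 0);  // let the last query finish
   return 0;
}
```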
Task 4
- Implement query compilation of relational algebra trees to C++ code. Use the produce/consume (push) model. You need to support the table scan, selection, print, and inner join operators (a sketch of a possible operator interface follows this list).
- Manually build an algebra tree for the following SQL query:
select c_first, c_last, o_all_local, ol_amount from customer, "order", orderline where o_w_id = c_w_id and o_d_id = c_d_id and o_c_id = c_id and o_w_id = ol_w_id and o_d_id = ol_d_id and o_id = ol_o_id and c_id = 322 and c_w_id = 1 and c_d_id = 1
- Compile the expression to C++ code and execute the resulting code (including a print operator at the root).
- To make the generated code easier to read, it may be useful to indent it using clang-format.
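As a starting point, one possible shape of the produce/consume interface is sketched below. Operators emit C++ source into a stream; all class and member names, and the assumed column-store access pattern in the generated code, are illustrative, not prescribed:

```cpp
// Minimal sketch of the push-based produce/consume code generation interface.
#include <ostream>
#include <string>
#include <utility>

struct Operator {
   Operator* parent = nullptr;
   virtual ~Operator() = default;
   // produce(): generate code that produces this operator's tuples
   virtual void produce(std::ostream& out) = 0;
   // consume(): a child's generated code has a tuple ready for this operator
   virtual void consume(std::ostream& out, Operator* src) = 0;
};

struct TableScan : Operator {
   std::string table;
   explicit TableScan(std::string t) : table(std::move(t)) {}
   void produce(std::ostream& out) override {
      out << "for (size_t tid = 0; tid != db." << table << ".size(); ++tid) {\n";
      parent->consume(out, this);  // let the parent emit per-tuple code
      out << "}\n";
   }
   void consume(std::ostream&, Operator*) override {}  // a scan has no children
};

struct Selection : Operator {
   Operator* input;
   std::string predicate;  // e.g. "db.customer.c_id[tid] == 322"
   Selection(Operator* in, std::string p)
      : input(in), predicate(std::move(p)) { in->parent = this; }
   void produce(std::ostream& out) override { input->produce(out); }
   void consume(std::ostream& out, Operator*) override {
      out << "if (" << predicate << ") {\n";
      parent->consume(out, this);
      out << "}\n";
   }
};
```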
Task 5
- Implement a simple parser for the following subset of SQL:
- The select clause contains one or more attribute names.
- The from clause contains one or more relation names.
- The where clause is optional and can contain one or more selections (connected by "and"). You only need to support selections of the form "attribute = attribute" and "attribute = constant".
- You can assume that each relation is referenced only once, and that all attribute names are unique.
- Implement semantic analysis, which takes the parsed SQL statement and the schema as input and computes an algebra tree that can be used as input to your code generator. Report errors when non-existing attributes or tables are referenced. Also, you should report an error if a table has no join condition (i.e., a cross product would be necessary). Build left-deep join trees based on the order of the relations in the from clause.
- Implement a "read-eval-print-loop" for your database system:
- read one line from standard input
- analyze it (report any errors and abort if necessary)
- generate C++ code and store it in a temporary file
- call a C++ compiler and generate a temporary shared library (.so file)
- using dlopen, dlsym, and dlclose you can now load the shared library and execute the query in it (have a look at the example, and at the sketch at the end of this task); you may need to compile with -fPIC and -rdynamic
- Measure and report the compilation and the execution times separately.
- hint: have a look at the rlwrap tool
- Test the following queries:
select w_id from warehouse;
select c_id, c_first, c_middle, c_last from customer where c_last = 'BARBARBAR';
select d_name from district, order where d_w_id = o_w_id and o_d_id = d_id and o_id = 7;
select o_id, ol_dist_info from order, orderline where o_id = ol_o_id and ol_d_id = o_d_id and o_w_id = ol_w_id and ol_number = 1 and ol_o_id = 100;
select o_id, ol_dist_info from orderline, order where o_id = ol_o_id and ol_d_id = o_d_id and o_w_id = ol_w_id and ol_number = 1 and ol_o_id = 100;
select c_last, o_id, i_id, ol_dist_info from customer, order, orderline, item where c_id = o_c_id and c_d_id = o_d_id and c_w_id = o_w_id and o_id = ol_o_id and ol_d_id = o_d_id and o_w_id = ol_w_id and ol_number = 1 and ol_o_id = 100 and ol_i_id = i_id;
select c_last, o_id, ol_dist_info from customer, order, orderline where c_id = o_c_id and o_id = ol_o_id and ol_d_id = o_d_id and o_w_id = ol_w_id and ol_number = 1 and ol_o_id = 100;
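For the dynamic loading step, a minimal sketch could look like the following; the file names and the exported symbol run are assumptions, and the generated code must be compiled with -fPIC as noted above:

```cpp
// Minimal sketch of compile-load-execute for one generated query.
// Assumes the generated query.cpp exports: extern "C" void run(Database& db);
#include <dlfcn.h>
#include <cstdlib>
#include <stdexcept>

struct Database;  // your schema/storage code

void compileAndRun(Database& db) {
   // compile the generated code into a shared library
   if (std::system("g++ -O3 -std=c++11 -fPIC -shared query.cpp -o query.so") != 0)
      throw std::runtime_error("compilation failed");
   // load the library and look up the query entry point
   void* handle = dlopen("./query.so", RTLD_NOW);
   if (!handle) throw std::runtime_error(dlerror());
   auto run = reinterpret_cast<void (*)(Database&)>(dlsym(handle, "run"));
   if (!run) throw std::runtime_error(dlerror());
   run(db);          // execute the query
   dlclose(handle);  // unload before compiling the next query
}
```

Remember to link the host program with -ldl (for dlopen) and -rdynamic, so that symbols of the main binary are visible to the loaded library.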
Project Topics
- Worst-case optimal joins
- Fast scans
- Row/Column Store Hybrids
- Bw-Tree/Masstree
- Hardware Transactional Memory/Synchronization
- Compression
- Column-Store Query Engines
- Cracking
- Transaction Processing (Hekaton)
- Transaction Processing (Silo)
- Hash Joins
- Distributed Query Processing