Week 41 / 2023

Mohamed Saif published on
23 min, 4444 words

Categories: PostgreSQL

Python Packaging

  • The term "Python package" refers to a directory containing Python modules. The term "release" is used to define one version of a project, and the term "distribution" defines a source or binary distribution of a release, such as a tarball or zip file.

SchemaSPY

  • docker run -it --add-host host.docker.internal:host-gateway --network="host" -v schemaspy-config:/schemaspy-config -v output-directory:/output schemaspy/schemaspy -t pgsql11 -host localhost -port 5432 -db [database-name] -u postgres -p [password] -s public bash
  • docker ps -a | grep schemaspy/schemaspy
  • docker cp 8d658cd8c9bd:/output/ [host output directory]

PostgreSQL Query Optimization

  • you’ll be prepared to optimize in precisely this fashion: as an integrated part of query development.
  • however, the most important thing is to understand how a database engine processes a query and how a query planner decides what execution path to choose.
  • “Think like a database!”
  • The underlying source of the problem is that SQL is a declarative language. We describe what we want, but we don’t describe how to get it.
  • determined by many different factors, such as storage structures, indexes, and data statistics.
  • "With the query constructed like this, you are not letting the query planner choose the best execution path, because the sequence of actions is hard-coded."
  • "Although the preceding statement is written in a declarative language, it is imperative by nature."
  • Instead, to write a declarative query, simply specify what you need to retrieve from the database.
  • what execution time is “good enough.”
  • How do we define optimization goals? We use the familiar SMART goal framework. SMART goals are Specific, Measurable, Achievable (attainable), Result-based (relevant), and Time-bound (time-driven).
  • A SQL query cannot be optimized in isolation, outside the context of its purpose and the environment in which it is executed.
  • Finding out the business intent of what is to be done might be the first and the most critical optimization step.
  • short queries and long queries
  • In the majority of cases, in OLTP systems we are optimizing short queries and in OLAP systems both short and long queries.
  • the concept of “first write and then optimize” and that this book's goal is to help you write queries right, right away.
  • Database Design matters
  • Query execution time may change because data volume increased or the data distribution changed or execution frequency increased.
  • Another PostgreSQL feature you should be aware of is the difference between the execution of parameterized queries and dynamic SQL.
  • In order to produce query results, PostgreSQL performs the following steps:
    1. Compile and transform a SQL statement into an expression consisting of high-level logical operations, known as a logical plan.
    2. Optimize the logical plan and convert it into an execution plan.
    3. Execute (interpret) the plan and return results.
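  • A minimal way to see the result of these steps (the customers and orders tables below are hypothetical and reused in later sketches): EXPLAIN prints the execution plan produced by the first two steps without running it, while EXPLAIN ANALYZE also runs the executor and reports actual row counts and timings.

        -- Hypothetical tables used in the sketches throughout these notes
        CREATE TABLE customers (id int PRIMARY KEY, name text);
        CREATE TABLE orders (id int PRIMARY KEY, customer_id int, total numeric);

        -- Plan only: parse, rewrite, and plan the statement, then print the plan
        EXPLAIN
        SELECT customer_id, sum(total)
        FROM orders
        WHERE total > 100
        GROUP BY customer_id;

        -- Plan and execute: also runs the plan and shows actual rows/timing per node
        EXPLAIN ANALYZE
        SELECT customer_id, sum(total)
        FROM orders
        WHERE total > 100
        GROUP BY customer_id;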
  • Overview of PostgreSQL Internals:
    1. Connection
    2. Parser Stage
    3. Rewriter Stage
    4. Planner/Optimizer Stage
    5. Executor Stage, which recursively executes the plan
  • Connection:
  • PostgreSQL implements a “process per user” client/server model. In this model, every client process connects to exactly one backend process.
  • A “supervisor process” is responsible for managing the backend processes; it spawns a new backend process every time a new client connects to the database.
  • Those backend processes communicate with each other and with other processes of the instance using semaphores and shared memory to ensure data integrity throughout concurrent data access.
  • Parser Stage:
  • The parser is responsible for checking the syntax of the query and transforming it into a query tree.
  • parser and transformation process
  • The parser and lexer are implemented using the well-known Unix tools bison and flex.
  • scan.l: the lexer is responsible for breaking the input text into tokens. For every key word or identifier that is found, a token is generated and handed to the parser.
  • gram.y: the parser is responsible for checking the syntax of the query and transforming it into a query tree.
  • The parser stage creates a parse tree using only fixed rules about the syntactic structure of SQL. It does not make any lookups in the system catalogs, so there is no possibility to understand the detailed semantics of the requested operations. After the parser completes, the transformation process takes the tree handed back by the parser as input and does the semantic interpretation needed to understand which tables, functions, and operators are referenced by the query. The data structure that is built to represent this information is called the query tree.
  • Rewriter Stage:
  • The rewriter stage is responsible for expanding views and inline functions, flattening subqueries, and performing other transformations that are needed to produce a query tree that is ready for optimization.
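  • A quick way to watch the rewriter at work (the table and view names here are made up): EXPLAIN a query against a view, and the plan references the underlying table directly, because the view has been replaced by its defining query before planning.

        -- Hypothetical view over a hypothetical table
        CREATE TABLE payments (id int PRIMARY KEY, status text, amount numeric);
        CREATE VIEW paid_payments AS
            SELECT id, amount FROM payments WHERE status = 'paid';

        -- The resulting plan scans "payments", not "paid_payments":
        -- the rewriter expanded the view before the planner ever saw it.
        EXPLAIN SELECT * FROM paid_payments WHERE amount > 10;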
  • Planner/Optimizer Stage:
  • The task of the planner/optimizer is to create an optimal execution plan. The planner/optimizer is a cost-based optimizer, which means that it tries to find the cheapest plan to execute the query.
  • After the cheapest path is determined, a full-fledged plan tree is built to pass to the executor.
  • index's operator class?
  • The possible plans are determined by the available indexes on each relation. There is always the possibility of performing a sequential scan on a relation, so a sequential scan plan is always created.
  • The finished plan tree consists of sequential or index scans of the base relations, plus nested-loop, merge, or hash join nodes as needed, plus any auxiliary steps needed, such as sort nodes or aggregate-function calculation nodes
  • Most of these plan node types have the additional ability to do selection (discarding rows that do not meet a specified Boolean condition) and projection (computation of a derived column set based on given column values, that is, evaluation of scalar expressions where needed).
  • One of the responsibilities of the planner is to attach selection conditions from the WHERE clause and computation of required output expressions to the most appropriate nodes of the plan tree.
  • The planner also has to decide which join method to use for each join. The join method is the algorithm that is used to combine rows from two relations.
  • Executor Stage:
  • The executor stage is responsible for executing the plan tree and returning the results to the client.
  • ..but the general approach is the same: each node computes and returns its next output row each time it is called. Each node is also responsible for applying any selection or projection expressions that were assigned to it by the planner.
  • Relational Operations:
  • Any relational operation takes one or more relations as its arguments and produces another relation as its output.
  • ..filter, project, and product.
  • Filter is a unary operation that takes a relation as its input and produces a relation as its output. It discards rows that do not meet a specified Boolean condition.
  • Project is a unary operation that takes a single relation as its input and produces a relation as its output. It removes some attributes (columns) and computes a derived column set based on given column values, that is, evaluates scalar expressions where needed. The relational project operation also removes duplicates from the output, while the SQL project operation does not. To achieve the same result in PostgreSQL, we would need to add the distinct keyword to the SELECT clause.
  • Product is a binary operation that takes two relations as its arguments and produces a relation as its output. It computes the Cartesian product of the two input relations. The Cartesian product of two relations is a relation that contains all possible combinations of rows from the two input relations.
  • a join operation can be expressed as a product followed by filtering.
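  • In SQL terms (reusing the hypothetical customers and orders tables from the earlier sketch), a product followed by filtering and an explicit inner join are two spellings of the same thing, and the optimizer is free to produce the same plan for both.

        -- Product (CROSS JOIN) restricted by a filter ...
        SELECT c.name, o.total
        FROM customers c CROSS JOIN orders o
        WHERE c.id = o.customer_id;

        -- ... is equivalent to an inner join
        SELECT c.name, o.total
        FROM customers c JOIN orders o ON c.id = o.customer_id;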
  • The last piece of relational theory which we need for optimization is equivalence rules.
    1. Commutativity: A join operation is commutative, which means that the order of the arguments does not matter. A join of A and B is equivalent to a join of B and A. A JOIN B = B JOIN A
    2. Associativity: A join operation is associative, which means that the grouping of the arguments does not matter. Joining A and B first and then joining with C is equivalent to joining A with the result of joining B and C. (A JOIN B) JOIN C = A JOIN (B JOIN C)
    3. Distributivity: A join operation distributes over a union operation. A join of A and a union of B and C is equivalent to a union of a join of A and B and a join of A and C. A JOIN (B UNION C) = (A JOIN B) UNION (A JOIN C)
  • Equivalences ensure that a query may be represented with several different expressions, providing the impetus for an optimizer.
  • Other additional operations are needed to represent SQL constructs that cannot be expressed in relational theory, for example, left, right, and full outer joins produce a result that is not a relation (but still is a SQL table).
  • The most important resources are those that affect execution time, namely, CPU cycles and I/O accesses (read/write disk blocks). The optimizer uses these resources to estimate the cost of each plan.
  • To represent cost models, we’ll use simple formulas with the following notation: for any table or relation R, T_R and B_R denote the number of rows in the table and the number of storage blocks occupied by the table, respectively.
  • The efficiency of such operations depends on the ratio of rows that are retained to the total rows in the stored table. This ratio is called selectivity. The choice of algorithm for a given read operation depends on the selectivity of filters that can be simultaneously applied.
  • by default, PostgreSQL uses blocks containing 8192 bytes each. A block is the unit that is transferred between the hard drive and the main memory, and the number of I/O operations needed to execute any data access is equal to the number of blocks that are being read or written.
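  • The planner's view of a table's size in blocks and rows is visible in the pg_class catalog (relpages roughly corresponds to B_R and reltuples to T_R in the notation above); the values are estimates maintained by VACUUM and ANALYZE. The table name below is the hypothetical one from the earlier sketch.

        SHOW block_size;            -- typically 8192

        SELECT relpages, reltuples
        FROM pg_class
        WHERE relname = 'orders';   -- hypothetical table from the earlier sketch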
  • First, they are “redundant” database objects; they do not store any additional information that can’t be found in the source table itself. Second, indexes provide additional data access paths; they allow us to determine what values are stored in the rows of a table without actually reading the table
  • The choice of the best data access algorithm depends mostly on query selectivity.
  • We intentionally omitted all numbers on this chart as they depend on hardware and table size, while the qualitative comparison does not.
  • The most interesting point is the intersection of two lines: for smaller values of selectivity, index-based access is preferable, while a full scan is better for larger values of selectivity
  • so the additional overhead of indexes is higher for a given proportion of rows.
  • However, the cost of updates (such as insertions of new keys) can be very high for both ordered lists and binary trees: an insertion of a single record can cause complete restructuring. In contrast, B-trees can be modified without significant overhead. When a record is inserted, the restructuring is limited to one block. If the block capacity is exceeded, then the block is split into two blocks, and the update is propagated to upper levels. In the worst case, the number of modified blocks cannot exceed the depth of the tree.
  • A bitmap is an auxiliary data structure that is used internally in PostgreSQL for several different purposes. Bitmaps can be considered a kind of index: they are built to facilitate access to other data structures containing several data blocks. Typically, bitmaps are used to compactly represent properties of table data.
  • Combining Relations:
  • Nested-loop join is the simplest join algorithm. It is used when one of the relations is small enough to fit into the main memory. The algorithm is very simple: for each row of the outer relation, we scan the inner relation and check whether the join condition is satisfied. If so, we output the resulting row.
  • Hash join is a more sophisticated algorithm that is used when both relations are too large to fit into the main memory. The algorithm consists of two phases. In the first phase, we build a hash table on the inner relation. In the second phase, we scan the outer relation and probe the hash table to find matching rows.
  • The basic version of the hash join algorithm includes two phases:
    1. During the build phase, all tuples of R are stored in buckets according to the values of the hash function.
    2. In the probe phase, each row of table S is sent to an appropriate bucket. If matching rows of table R are in the bucket, output rows are produced. The easiest way to find matching rows in the bucket is to use nested loops (actually, loop over all rows in the bucket for each row of S). PostgreSQL uses a better matching algorithm based on Bloom filtering.
  • The two phases of the hash-based algorithm are shown as separate physical operations in the execution plan.
  • The basic hash join algorithm works if all buckets produced at the build phase can fit into main memory.
  • Hybrid hash join is a modification of the hash join algorithm that allows us to process relations that are larger than main memory. The idea is to split the build phase into several steps. In each step, we build a hash table on a subset of the inner relation. In the probe phase, we probe all hash tables to find matching rows.
  • Sort-merge join is a join algorithm that is used when both relations are too large to fit into the main memory. The algorithm consists of two phases. In the first phase, we sort both relations on the join key. In the second phase, we scan both relations in parallel and output matching rows.
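  • The chosen join algorithm shows up as a plan node (Nested Loop, Hash Join, or Merge Join). For experimentation only, the planner's enable_* settings can discourage a particular algorithm in the current session; this is a way to compare plans, not a tuning recommendation. Tables are the hypothetical ones from the earlier sketch.

        EXPLAIN
        SELECT c.name, o.total
        FROM customers c JOIN orders o ON o.customer_id = c.id;

        -- Discourage hash joins for this session only and compare the plan
        SET enable_hashjoin = off;
        EXPLAIN
        SELECT c.name, o.total
        FROM customers c JOIN orders o ON o.customer_id = c.id;
        RESET enable_hashjoin;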
  • While a SELECT defines what needs to be done, an execution plan defines how to execute SQL operations.
  • The job of the optimizer is to build the best possible physical plan that implements a given logical plan. This is a complex process: sometimes, a complex logical operation is replaced with multiple physical operations, or several logical operations are merged into a single physical operation.
  • To build a plan, the optimizer uses transformation rules, heuristics, and cost-based optimization algorithms.
  • Specifically, a plan contains estimations of costs, expected number of rows in the output, and expected average width of the output rows. All these values are calculated from the database statistics. The values of costs include the accumulated cost of all previous operations. There are two cost estimations for each operation: the first shows the cost needed to produce the first row of output, while the second estimates the cost of the complete result.
  • It’s important to emphasize that all these numbers are approximate. The actual values obtained during execution may differ. If you suspect that the optimizer chose a plan that is not optimal, you might need to look at these estimates.
  • It shows each node of the tree on a separate line starting with ->, with the depth of the node represented by the offset. Subtrees are placed after their parent node. Some operations are represented with two lines.
  • The execution of a plan starts from the leaves and ends at the root. This means that the operation that is executed first will be on the line that has the rightmost offset. Of course, a plan may contain several leaf nodes that are executed independently. As soon as an operation produces an output row, this row is pushed to the next operation. Thus, there is no need to store intermediate results between operations.
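  • To read a plan with actual numbers, EXPLAIN (ANALYZE, BUFFERS) is the usual starting point (the option list is standard EXPLAIN syntax; the query reuses the hypothetical tables from the earlier sketch). Start reading at the most deeply indented -> nodes, the leaves, and work upward toward the root.

        EXPLAIN (ANALYZE, BUFFERS)
        SELECT c.name, sum(o.total)
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name;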
  • Since several filters are applied to the table and only one of the filtering conditions is supported by the index, PostgreSQL performs an index bitmap scan (covered in Chapter 2).
  • “I have a query, and it is slow, and you tell me to look at the execution plan, and it is 100+ lines long. What should I do? Where should I start?”
  • The good news is that most of the time, you do not need to read the whole plan to understand what exactly makes the execution slow. In this section, we will learn more about interpreting execution plans.
  • Plans may vary in order of operations, Algorithms used for joins and other operations, Data retrieval methods
  • Fortunately, PostgreSQL does not check every possible plan.
  • The cost-based optimization algorithm relies on the optimality principle:
  • For example, in the preceding example, once the optimizer selects the correct data retrieval algorithm for one of the three tables, it will not consider any plans that do not use this optimal algorithm.
  • Heuristics cut out parts of the plan space that are unlikely to contain optimal plans, reducing the number of plans examined by the optimization algorithm. While this feature helps the optimizer select an execution plan more quickly, it can also affect performance negatively: there is a risk that the best execution plan will be accidentally dropped before the cost comparison. Although heuristics may cut out the optimal plan, the algorithm builds the best of the remaining plans.
  • The cost of each execution plan depends on:
    1. Cost formulas of algorithms used in the plan
    2. Statistical data on tables and indexes, including distribution of values
    3. System settings (parameters and preferences), such as join_collapse_limit or cpu_index_tuple_cost
  • However, due to the factors listed in the preceding list, the optimizer may produce different execution plans for nearly identical SQL queries or even for the same query.
  • The first step is to identify whether the query is a short query or a long query.
  • What is a short query? First, it has nothing to do with the length of the SQL query. Second, it is not defined by the size of the result set.
  • A query is short when the number of rows needed to compute its output is small, no matter how large the involved tables are. Short queries may read every row from small tables but read only a small percentage of rows from large tables.
  • When we optimize a short query, we know that in the end, we select a relatively small number of records. This means that the optimization goal is to reduce the size of the result set as early as possible. If the most restrictive selection criterion is applied in the first steps of query execution, further sorting, grouping, and even joins will be less expensive.
  • you can't select a subset of records quickly from a table if there is no index supporting the corresponding search. That's why short queries require indexes for faster execution. If there is no index to support a highly restrictive query, in all likelihood, one needs to be created.
  • the smaller the number of records that correspond to one value of the index, the lower the index’s selectivity value. We do not want to create indexes with high selectivity;
  • The best performance possible for a short query occurs when the most restrictive indexes (i.e., indexes with the lowest selectivity) are used.
  • Can you tell which filtering criterion is the most restrictive?
  • But if the most restrictive criterion is not indexed, the execution plan may be suboptimal, and likely, an additional index is needed.
  • What is the difference between a primary key and a unique constraint? The answer is that there is no difference. A primary key is a unique constraint with a not null constraint. The only reason to use a primary key is to mark the column as the primary key in the system catalogs.
  • An index is unique if for each indexed value there is exactly one matching row in the table.
  • It is also possible to create a unique index without formally defining a unique constraint. All you have to do is to add the keyword unique to the index creation statement: CREATE UNIQUE INDEX ON table (column);
  • If we create this index after data was already inserted into this table, CREATE UNIQUE INDEX will validate the uniqueness of values, and if any duplicates are found, the index won’t be created. For any subsequent inserts and updates, the uniqueness of new values will be validated as well.
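  • A minimal sketch of that behavior, using a hypothetical accounts table:

        CREATE TABLE accounts (email text);
        INSERT INTO accounts VALUES ('a@example.com'), ('a@example.com');

        CREATE UNIQUE INDEX ON accounts (email);   -- fails: duplicate values already exist

        DELETE FROM accounts WHERE email = 'a@example.com';
        CREATE UNIQUE INDEX ON accounts (email);   -- succeeds on the deduplicated data

        INSERT INTO accounts VALUES ('b@example.com');
        INSERT INTO accounts VALUES ('b@example.com');  -- fails the uniqueness check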
  • because B-tree indexes do not support searches with the “like” operator.
  • pattern search index: CREATE INDEX ON table (column varchar_pattern_ops);
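  • A hedged sketch of how such an index helps (reusing the hypothetical customers table; text_pattern_ops is the text-column counterpart of the varchar_pattern_ops class mentioned above): a left-anchored LIKE pattern can use the index, while a pattern starting with a wildcard cannot.

        CREATE INDEX ON customers (name text_pattern_ops);

        EXPLAIN SELECT * FROM customers WHERE name LIKE 'Smi%';   -- may use the index
        EXPLAIN SELECT * FROM customers WHERE name LIKE '%son';   -- a B-tree cannot help here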
  • There are three separate approaches to pattern matching provided by PostgreSQL: the traditional SQL LIKE operator, the more recent SIMILAR TO operator (added in SQL:1999), and POSIX-style regular expressions.
  • If pattern does not contain percent signs or underscores, then the pattern only represents the string itself; in that case LIKE acts like the equals operator.
  • The escape character is a special character that indicates that the character following it should be treated as a literal character rather than as a wildcard in the pattern. SELECT * FROM employees WHERE last_name LIKE 'Smith\_%' ESCAPE '\'; matches any employee whose last name starts with a literal 'Smith_' followed by any sequence of characters.
  • _ matches any single character. % matches any sequence of zero or more characters.
  • The operator ~~ is equivalent to LIKE, and ~~* corresponds to ILIKE. There are also !~~ and !~~* operators that represent NOT LIKE and NOT ILIKE, respectively.
  • You may see these operator names in EXPLAIN output and similar places, since the parser actually translates LIKE et al. to these operators.
  • The ^@ operator means “starts with”: column ^@ 'pattern' is equivalent to LIKE 'pattern%'.
  • Multiple Indexes:
  • The answer is in the word bitmap, as seen in the execution plan. Creating in-memory bitmaps allows the optimizer to use multiple indexes on one table to speed up data access.
  • After this process is completed, the only blocks left are the blocks that satisfy all search criteria, and PostgreSQL reads all the records in the remaining blocks to recheck the search conditions.
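  • A sketch of the setup that typically produces such a plan (hypothetical tables and values): with separate indexes on two columns, a query filtering on both may show a BitmapAnd node combining two Bitmap Index Scans, followed by a Bitmap Heap Scan that rechecks the conditions.

        CREATE INDEX ON orders (customer_id);
        CREATE INDEX ON orders (total);

        EXPLAIN
        SELECT * FROM orders
        WHERE customer_id = 42 AND total > 100;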
  • Remember that the lower the selectivity is, the faster the search is, and when we are optimizing short queries, our goal is to avoid reading a large number of rows at any given point (even if we will be able to filter them out later).
  • Covering Indexes: A covering index is specifically designed to include the columns needed by a particular type of query that you run frequently.
  • To solve this performance problem, PostgreSQL supports index-only scans, which can answer queries from an index alone without any heap access.
  • There are two fundamental restrictions on when this method can be used:
    1. The index type must support index-only scans. B-tree indexes always do. The underlying requirement is that the index must physically store, or else be able to reconstruct, the original data value for each index entry.
    2. The query must reference only columns stored in the index.
  • To make effective use of the index-only scan feature, you might choose to create a covering index, which is an index specifically designed to include the columns needed by a particular type of query that you run frequently.
  • Since queries typically need to retrieve more columns than just the ones they search on, PostgreSQL allows you to create an index in which some columns are just “payload” and are not part of the search key
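  • A minimal covering-index sketch (hypothetical columns; the INCLUDE clause is the standard way to add payload columns in PostgreSQL 11 and later): the query below references only columns stored in the index, so an Index Only Scan becomes possible, subject to visibility-map state.

        CREATE INDEX ON orders (customer_id) INCLUDE (total);

        EXPLAIN
        SELECT customer_id, total
        FROM orders
        WHERE customer_id = 42;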
  • Excessive selection:
  • This practice is called using excessive selection criteria. The intent is to use this additional filter to preselect a small subset of records from a large table.
  • Recall that a query is short if it needs a small number of rows to compute results. Indeed, in this case, the number of rows we need is small, but there is no easy way to find them. So here is the caveat: it is not just that a short query requires a small number of rows, but also that the number of rows in the result of any intermediate operation should also be small. If a query with three joins is short and, after executing the first of the joins, the intermediate result is huge, it means that something is wrong with the execution plan.
  • A partial index contains entries only for those table rows that satisfy the predicate. Typical reasons to use one (a sketch follows this list):
    1. To exclude common values
    2. To exclude uninteresting values
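  • A partial-index sketch (the tasks table and its status values are hypothetical): the index covers only the rows queries actually care about, excluding the common, uninteresting value, so it stays small.

        CREATE TABLE tasks (id int PRIMARY KEY, status text, created_at timestamptz);

        -- Most rows end up with status = 'done'; exclude that common value from the index
        CREATE INDEX tasks_active_idx ON tasks (created_at) WHERE status <> 'done';

        -- A query whose WHERE clause implies the index predicate may use the partial index
        EXPLAIN
        SELECT * FROM tasks
        WHERE status <> 'done' AND created_at > now() - interval '1 day';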
  • Again, in short queries, the optimization goal is to avoid large intermediate results. That means ensuring that the most restrictive selection criteria are applied first. After that, for each join operation, we should ensure that the result continues to be small.
  • Is there a way to avoid using existing indexes? Most of the time, the optimizer is smart enough to figure out when it should or shouldn’t use indexes.
  • PostgreSQL modifies execution plans based on data statistics.
  • Two major reasons against creating too many indexes are that indexes take up extra space in the database and that insert and update operations become slow when there are too many indexes to update along with the record itself.
  • The prevailing guidance used to be to drop all indexes on a table before a bulk load and then to create them again.
  • Since then, times have changed. With current hard- and software, the situation is different. Disk storage is cheaper, disks are faster, and, in general, fast response is more valuable than saving disk space.
  • But still, the question remains: when is enough enough?
Chapter 6: Long Queries
  • This is a dangerous practice. If report performance is neglected, performance can easily degrade from minutes to hours or more.
  • A query is considered long when query selectivity is high for at least one of the large tables; that is, almost all rows contribute to the output, even when the output size is small.
  • It is possible to improve the performance of a long query by a factor of several hundred when two optimization strategies are applied:
    1. Avoid multiple table scans.
    2. Reduce the size of the result at the earliest possible stage.
  • For long queries, indexes are not needed, and if tables are indexed, we want to ensure that indexes are not used.
  • Why are full table scans desirable for long queries?
  • a hash join algorithm is used, and that’s exactly what we hope to see in the execution plan of a long query.
  • Hash joins work best when the first argument fits into main memory. The size of memory available can be tuned with server parameters.
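  • The main knob here is work_mem (a per-operation limit for hashes and sorts, not a per-session one); in recent PostgreSQL versions, hash_mem_multiplier additionally scales the memory a hash table may use. A session-level experiment might look like this:

        SHOW work_mem;
        SET work_mem = '256MB';   -- session-local; pick a value appropriate for your hardware

        EXPLAIN (ANALYZE)
        SELECT c.name, sum(o.total)
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name;

        RESET work_mem;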
  • For short queries, the desired join order is the one that would prompt the usage of indexes with lower selectivity first.
  • The most restrictive joins (i.e., joins that reduce the result set size the most) should be executed first.
  • Semi-joins never increase the size of the result set; check whether it is beneficial to apply them first.
  • An anti-join between two tables R and S returns rows from table R for which there are no rows from table S with a matching value in the joining column.
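  • Both patterns are usually written with EXISTS / NOT EXISTS (reusing the hypothetical tables from the earlier sketch); in the plan they typically appear as Semi Join and Anti Join variants of the regular join nodes.

        -- Semi-join: customers that have at least one order (never more than one output row per customer)
        SELECT c.*
        FROM customers c
        WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

        -- Anti-join: customers with no matching order at all
        SELECT c.*
        FROM customers c
        WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);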