What's new in PostgreSQL 9.5?

PostgreSQL 9.5 has not had its final release yet, but Alpha 2 is already available. The major features include IMPORT FOREIGN SCHEMA, Row-Level Security Policies, BRIN indexes, enhanced JSONB operators and functions, UPSERT (INSERT ... ON CONFLICT), and pg_rewind. The details are below.
(This page is currently under development ahead of the release of PostgreSQL 9.5)
This page contains an overview of PostgreSQL Version 9.5's features, including descriptions, testing and usage information, and links to blog posts containing further information.
Major new features
IMPORT FOREIGN SCHEMA
Row-Level Security Policies
BRIN Indexes
Foreign Table Inheritance
GROUPING SETS, CUBE and ROLLUP
JSONB-modifying operators and functions
jsonb || jsonb (concatenate / overwrite)
jsonb - text / int (remove key / array element)
jsonb - text[] / int (remove key / array element in path)
jsonb_replace function
jsonb_pretty
INSERT ... ON CONFLICT DO NOTHING/UPDATE
pg_rewind
Other new features
ALTER TABLE ... SET LOGGED / UNLOGGED
SKIP LOCKED
Parallel VACUUMing
Abbreviated Keys
GiST Index-Only Scans
Major new features
IMPORT FOREIGN SCHEMA
Previously, in order to create a foreign table in PostgreSQL, you needed to define the table yourself, listing the destination columns and data types. If you have a lot of tables, this becomes tedious and error-prone, and when those tables change, you need to do it all over again...
CREATE FOREIGN TABLE remote.customers (
id int NOT NULL,
name text,
company text,
registered_date date,
expiry_date date,
active boolean,
status text,
account_level text) SERVER dest_server OPTIONS (schema_name 'public');
CREATE FOREIGN TABLE remote.purchases (
id int NOT NULL,
purchase_time timestamptz,
payment_time timestamptz,
itemid int,
volume int,
invoice_sent boolean) SERVER dest_server OPTIONS (schema_name 'public');
As of PostgreSQL 9.5, you can import tables en masse:
IMPORT FOREIGN SCHEMA public
FROM SERVER dest_server INTO remote;
This would create foreign tables in the schema named "remote" for every table that appeared in the public schema on the foreign server labelled "dest_server".
You can also filter out any tables you don't wish:
IMPORT FOREIGN SCHEMA public
EXCEPT (reports, audit)
FROM SERVER dest_server INTO remote;
Or limit it to just a specific set of tables:
IMPORT FOREIGN SCHEMA public
LIMIT TO (customers, purchases)
FROM SERVER dest_server INTO remote;
Row-Level Security Policies
Additional security can be added to tables to prevent users from accessing rows they shouldn't be able to see.
Say you had a table with log data, where the username column contained the database user name which created the log entry:
CREATE TABLE log (
id serial primary key,
username text,
log_event text);
But you don't want users to see the log entries from other users, so we create a policy that says you're allowed to see the row if the username column matches the current user running the query:
CREATE POLICY policy_user_log ON log
USING (username = current_user);
And then we enable Row Level Security on the table:
ALTER TABLE log
ENABLE ROW LEVEL SECURITY;
As the user "report", we would then only see rows where the username column contained the value 'report':
# SELECT * FROM log;
 id | username |   log_event
----+----------+----------------
  1 | report   | DELETE issued
  4 | report   | Reset accounts
As the user "messaging", we see a different set of rows:
 id | username  |      log_event
----+-----------+----------------------
  2 | messaging | Message queue purged
  3 | messaging | Reset accounts
Whereas the "postgres" user, as the superuser, would get:
 id | username  |      log_event
----+-----------+----------------------
  1 | report    | DELETE issued
  2 | messaging | Message queue purged
  3 | messaging | Reset accounts
  4 | report    | Reset accounts
That's because superusers (and any role with the BYPASSRLS attribute) bypass row-level security by default, so they see all rows.
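Roles other than superusers can be granted the same exemption explicitly with the BYPASSRLS role attribute, also new in 9.5; the role name below is purely illustrative:

-- Allow a trusted auditing role to bypass all row-level security policies
ALTER ROLE auditor BYPASSRLS;
-- And take that ability away again
ALTER ROLE auditor NOBYPASSRLS;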
BRIN Indexes
BRIN stands for Block Range INdex. A BRIN index stores metadata about ranges of pages: at present, the minimum and maximum values within each block range.
This results in an inexpensive index that occupies a very small amount of space and can speed up queries against extremely large tables. The index lets the planner determine which block ranges are worth checking and skip all the others. So if a 10GB table of orders contained rows that were roughly in order of order date, a BRIN index on the order_date column would allow the majority of the table to be skipped rather than performing a full sequential scan. This will still be slower than a regular BTREE index on the same column, but with the benefit of being far smaller and requiring less maintenance.
For example:
-- Create the table
CREATE TABLE orders (
order_date timestamptz,
item text);
-- Insert lots of data into it
INSERT INTO orders (order_date, item)
SELECT x, 'dfiojdso'
FROM generate_series('2000-01-01 00:00:00'::timestamptz, '2015-03-01 00:00:00'::timestamptz, '2 seconds'::interval) a(x);
-- Let's look at how much space the table occupies
# \dt+ orders
                    List of relations
 Schema |  Name  | Type  | Owner | Size | Description
--------+--------+-------+-------+------+-------------
 public | orders | table | thom  |      |
-- What's involved in finding the orders between 2 dates
# EXPLAIN ANALYSE SELECT count(*) FROM orders WHERE order_date BETWEEN '2012-01-04 09:00:00' and '2012-01-04 14:30:00';
                                                                  QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=... rows=1 width=0) (actual time=... rows=1 loops=1)
   ->  Seq Scan on orders  (cost=0.00... rows=... width=0) (actual time=...552.976 rows=... loops=1)
         Filter: ((order_date >= '2012-01-04 09:00:00+00'::timestamp with time zone) AND (order_date <= '2012-01-04 14:30:00+00'::timestamp with time zone))
         Rows Removed by Filter: ...
 Planning time: 0.140 ms
 Execution time: ...
-- Now let's create a BRIN index on the order_date column
CREATE INDEX idx_order_date_brin
ON orders
USING BRIN (order_date);
-- And see how much space it takes up
# \di+ idx_order_date_brin
                             List of relations
 Schema |        Name         | Type  | Owner | Table  |  Size  | Description
--------+---------------------+-------+-------+--------+--------+-------------
 public | idx_order_date_brin | index | thom  | orders | 504 kB |
-- Now let's see how much faster the query is with this very small index
# EXPLAIN ANALYSE SELECT count(*) FROM orders WHERE order_date BETWEEN '2012-01-04 09:00:00' and '2012-01-04 14:30:00';
                                                                      QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=... rows=1 width=0) (actual time=...47.651 rows=1 loops=1)
   ->  Bitmap Heap Scan on orders  (cost=... rows=... width=0) (actual time=36.366... rows=... loops=1)
         Recheck Cond: ((order_date >= '2012-01-04 09:00:00+00'::timestamp with time zone) AND (order_date <= '2012-01-04 14:30:00+00'::timestamp with time zone))
         Rows Removed by Index Recheck: 6419
         Heap Blocks: lossy=232320
         ->  Bitmap Index Scan on idx_order_date_brin  (cost=0.00... rows=... width=0) (actual time=35.567..35.567 rows=2323200 loops=1)
               Index Cond: ((order_date >= '2012-01-04 09:00:00+00'::timestamp with time zone) AND (order_date <= '2012-01-04 14:30:00+00'::timestamp with time zone))
 Planning time: 0.108 ms
 Execution time: ...
This example is on an SSD drive, so the results would be even more pronounced on an HDD.
By default, each block range covers 128 pages. This resolution can be increased or decreased using the pages_per_range storage parameter:
-- Create an index with 32 pages per block
CREATE INDEX idx_order_date_brin_32
ON orders
USING BRIN (order_date) WITH (pages_per_range = 32);
-- Create an index with 512 pages per block
CREATE INDEX idx_order_date_brin_512
ON orders
USING BRIN (order_date) WITH (pages_per_range = 512);
The lower the pages per block, the more space the index will occupy, but the less lossy the index will be, i.e. it will need to discard fewer rows.
# \di+ idx_order_date_brin*
                               List of relations
 Schema |           Name          | Type  | Owner | Table  |  Size   | Description
--------+-------------------------+-------+-------+--------+---------+-------------
 public | idx_order_date_brin     | index | thom  | orders | 504 kB  |
 public | idx_order_date_brin_32  | index | thom  | orders | 1872 kB |
 public | idx_order_date_brin_512 | index | thom  | orders | 152 kB  |
Foreign Table Inheritance
Foreign tables can now either inherit local tables, or be inherited from.
For example, a local table can inherit a foreign table:
-- Create a new table which inherits from the foreign table
# CREATE TABLE local_customers () INHERITS (remote.customers);
-- Insert some data into it
# INSERT INTO local_customers VALUES (16, 'Bruce', $$Jo's Cupcakes$$, '2015-01-15', '2017-01-15', true, 'running', 'basic');
-- And if we query the parent foreign table...
# SELECT tableoid::regclass, * FROM remote.customers;
     tableoid     | id | name  |    company    | registered_date | expiry_date | active | status  | account_level
------------------+----+-------+---------------+-----------------+-------------+--------+---------+---------------
 remote.customers |  1 | James | Hughson Corp  |                 |             | t      |         |
 local_customers  | 16 | Bruce | Jo's Cupcakes | 2015-01-15      | 2017-01-15  | t      | running | basic
Or a foreign table can be made to inherit from a local table:
-- Create a new table that the foreign table will be a child of
# CREATE TABLE master_customers (LIKE remote.customers);
-- Insert a new row into this table
# INSERT INTO master_customers VALUES (99, 'Jolly', $$Cineplanet$$, '2014-12-01', '2016-12-01', true, 'running', 'premium');
-- Have the foreign table inherit from the new table
# ALTER TABLE remote.customers INHERIT master_customers;
-- Let's have a look at the contents of the new table now
# SELECT tableoid::regclass, * FROM master_customers;
     tableoid     | id | name  |    company    | registered_date | expiry_date | active | status  | account_level
------------------+----+-------+---------------+-----------------+-------------+--------+---------+---------------
 master_customers | 99 | Jolly | Cineplanet    | 2014-12-01      | 2016-12-01  | t      | running | premium
 remote.customers |  1 | James | Hughson Corp  |                 |             | t      |         |
 local_customers  | 16 | Bruce | Jo's Cupcakes | 2015-01-15      | 2017-01-15  | t      | running | basic
-- And the query plan...
# EXPLAIN ANALYSE SELECT tableoid::regclass, * FROM master_customers;
                                                         QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..140.80 rows=1012 width=145) (actual time=0.014..0.595 rows=3 loops=1)
   ->  Seq Scan on master_customers  (cost=0.00..1.48 rows=48 width=145) (actual time=0.012..0.013 rows=1 loops=1)
   ->  Foreign Scan on customers  (cost=100.00..124.52 rows=484 width=145) (actual time=0.567..0.567 rows=1 loops=1)
   ->  Seq Scan on local_customers  (cost=0.00..14.80 rows=480 width=145) (actual time=0.007..0.008 rows=1 loops=1)
 Planning time: 0.256 ms
 Execution time: 1.040 ms
GROUPING SETS, CUBE and ROLLUP
This set of features allows one to summarise data into sets.
For example, if we have this data:
# SELECT * FROM employees;
   name   |      role       | department | gender
----------+-----------------+------------+--------
          | Accountant      |            |
          | Project Manager | Sales      |
          | Project Manager | Finance    |
          | Project Manager | IT         |
 Penelope | Manager         |            |
If we wanted to see summaries for each department, role and gender, we can use GROUPING SETS:
# SELECT department, role, gender, count(*)
FROM employees
GROUP BY GROUPING SETS (department, role, gender, ());
 department |      role       |  gender   | count
------------+-----------------+-----------+-------
            | Accountant      |           |
            | Project Manager |           |
Here we can see the count of employees in each department, each role, and each gender. We also get a total where all columns except count are blank.
If we wanted a count for every combination of those 3 categories, we could use CUBE:
# SELECT department, role, gender, count(*)
FROM employees
GROUP BY CUBE (department, role, gender);
 department |      role       |  gender   | count
------------+-----------------+-----------+-------
            | Accountant      |           |
            | Accountant      |           |
            | Project Manager | Female    |
            | Project Manager |           |
            | Project Manager | Male      |
            | Project Manager |           |
            | Project Manager | Male      |
            | Project Manager |           |
            | Accountant      |           |
            | Accountant      |           |
            | Project Manager | Female    |
            | Project Manager | Male      |
            | Project Manager |           |
So we get counts for every combination of all values. If we instead want the columns grouped in sequence, summarising only from left to right, we'd use ROLLUP:
# SELECT department, role, gender, count(*)
FROM employees
GROUP BY ROLLUP (department, role, gender);
 department |      role       |  gender   | count
------------+-----------------+-----------+-------
            | Accountant      |           |
            | Accountant      |           |
            | Project Manager | Female    |
            | Project Manager |           |
            | Project Manager | Male      |
            | Project Manager |           |
            | Project Manager | Male      |
            | Project Manager |           |
So we don't get summaries per role or per gender except when used in combination with the previous columns.
These are just basic examples. Far more complicated configurations are possible.
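As a sketch of what is possible (reusing the illustrative employees table from above), the grouping constructs can be combined and nested:

-- Department totals, plus department broken down by role
SELECT department, role, count(*)
FROM employees
GROUP BY GROUPING SETS ((department), (department, role));

-- A ROLLUP mixed into an ordinary GROUP BY list:
-- always grouped by department, then rolled up by role and gender
SELECT department, role, gender, count(*)
FROM employees
GROUP BY department, ROLLUP (role, gender);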
JSONB-modifying operators and functions
In 9.3 (and to a greater extent in 9.4), JSONB data could be extracted using various functions and operators, but nothing that could actually modify the data. As of 9.5, JSONB data can now be modified.
jsonb || jsonb (concatenate / overwrite)
The || operator allows us to combine 2 jsonb objects. If there's overlap, values are replaced on the highest level.
For example, if we want to add values to a jsonb object:
# SELECT '{"name": "Joe", "age": 30}'::jsonb || '{"town": "London"}'::jsonb;
                   ?column?
----------------------------------------------
 {"age": 30, "name": "Joe", "town": "London"}
Or we can overwrite existing values:
# SELECT '{"town": "Dataville", "population": 4096}'::jsonb || '{"population": 8192}'::jsonb;
                 ?column?
-------------------------------------------
 {"town": "Dataville", "population": 8192}
Note that this only works on the highest level, so nested objects are replaced from the top level. For example:
# SELECT '{"name": "Jane", "contact": {"phone": "", "mobile": ""}}'::jsonb || '{"contact": {"fax": ""}}'::jsonb;
                 ?column?
------------------------------------------------------
 {"name": "Jane", "contact": {"fax": ""}}
jsonb - text / int (remove key / array element)
We can remove keys from a jsonb object with the - operator:
# SELECT '{"name": "James", "email": "james@localhost"}'::jsonb - 'email';
     ?column?
-------------------
 {"name": "James"}
Or remove values from an array (base 0):
# SELECT '["red","green","blue"]'::jsonb - 1;
    ?column?
-----------------
 ["red", "blue"]
jsonb - text[] / int (remove key / array element in path)
In the previous examples we could remove keys or array elements, but only at the top level. We can instead provide a path to the value we want to delete, using a text array. Here we'll remove the fax number from within the contact value:
# SELECT '{"name": "James", "contact": {"phone": "", "fax": ""}}'::jsonb - '{contact,fax}'::text[];
                  ?column?
---------------------------------------------------------
 {"name": "James", "contact": {"phone": ""}}
Or we can remove an array value. Here we'll get rid of the array value at index 1 (the 2nd value):
# SELECT '{"name": "James", "aliases": ["Jamie","The Jamester","J Man"]}'::jsonb - '{aliases,1}'::text[];
                     ?column?
--------------------------------------------------
 {"name": "James", "aliases": ["Jamie", "J Man"]}
jsonb_replace function
The above lets us delete values in a path, but not update them, so we have the jsonb_replace function for that. We'll update the phone value within the contact value:
# SELECT jsonb_replace('{"name": "James", "contact": {"phone": "", "fax": ""}}'::jsonb, '{contact,phone}', '"01234 567890"'::jsonb);
                           jsonb_replace
--------------------------------------------------------------------
 {"name": "James", "contact": {"fax": "", "phone": "01234 567890"}}
jsonb_pretty
Notice that jsonb doesn't preserve white-space, so no matter how much effort you went to in order to make the object easier to read, it will end up as a long string. Well, jsonb_pretty will format it for you. If we wrap the previous jsonb_replace example in a jsonb_pretty call:
# SELECT jsonb_pretty(jsonb_replace('{"name": "James", "contact": {"phone": "", "fax": ""}}'::jsonb, '{contact,phone}', '"01234 567890"'::jsonb));
          jsonb_pretty
---------------------------------
 {                              +
     "name": "James",           +
     "contact": {               +
         "fax": "",             +
         "phone": "01234 567890"+
     }                          +
 }
Much easier to read.
INSERT ... ON CONFLICT DO NOTHING/UPDATE
9.5 brings support for UPSERT operations with this additional syntax to the INSERT command. We can now tell INSERTs to switch to UPDATE operations if a conflict is found.
For example, say we have a simple table of user logins where we want to track the number of times each user has logged in:
# SELECT username, logins FROM user_logins;
 username | logins
----------+--------
 James    |      1
And we want to add 2 new logins. Normally we'd have a problem if the primary key (or unique constraint) were violated:
# INSERT INTO user_logins (username, logins)
VALUES ('Naomi',1),('James',1);
ERROR:  duplicate key value violates unique constraint "user_logins_pkey"
DETAIL:  Key (username)=(James) already exists.
Unlike approaches using a Common Table Expression, the new command has no race conditions, guaranteeing either an insert or an update (provided there is no incidental error). The guarantee is maintained even in the event of many concurrent updates, inserts or deletes.
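For comparison, a common pre-9.5 workaround was a writable CTE along these lines (a sketch only, using the same user_logins table; unlike ON CONFLICT it is not safe against concurrent inserts of the same key):

WITH upsert AS (
    UPDATE user_logins
       SET logins = logins + 1
     WHERE username = 'James'
 RETURNING username
)
INSERT INTO user_logins (username, logins)
SELECT 'James', 1
WHERE NOT EXISTS (SELECT username FROM upsert);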
Example of new syntax:
# INSERT INTO user_logins (username, logins)
VALUES ('Naomi',1),('James',1)
ON CONFLICT (username)
DO UPDATE SET logins = user_logins.logins + EXCLUDED.logins;
UPSERT 0 2
Now let's look at what happened:
# SELECT username, logins FROM user_logins;
 username | logins
----------+--------
 James    |      2
 Naomi    |      1
We have a new row for Naomi, which shows her having logged in once, but then we also have James whose logins value has incremented by one as specified by the UPDATE part of the statement. The UPDATE statement knows which rows it's updating based on the column or unique constraint that's being checked against.
Of course there are scenarios where you might want to insert a value into a table, but only if it's not there already.
Say we had a list of countries which would be used to constrain values in other tables:
# SELECT * FROM countries;
  country
-----------
 Japan
We want to add 2 more countries. If one or more of them already existed, violating the primary key (in this case the "country" column), we'd get an error:
# INSERT INTO countries (country) VALUES ('France'),('Japan');
ERROR:  duplicate key value violates unique constraint "countries_pkey"
DETAIL:  Key (country)=(Japan) already exists.
But now we can tell it that a conflict is fine, and just DO NOTHING in those scenarios:
# INSERT INTO countries (country) VALUES ('France'),('Japan') ON CONFLICT DO NOTHING;
INSERT 0 1
Now we should just have one additional country in our table:
# SELECT * FROM countries;
  country
-----------
 France
 Japan
If there were additional columns that also had unique constraints, we could specify the exact constraint or column the condition should apply to, so that a real conflict on another column still produces an error.
So we could have phrased that last example as:
# INSERT INTO countries (country) VALUES ('France'),('Japan') ON CONFLICT ON CONSTRAINT countries_pkey DO NOTHING;
# INSERT INTO countries (country) VALUES ('France'),('Japan') ON CONFLICT (country) DO NOTHING;
Note that providing multiple sets of conflict/update conditions isn't yet supported, so if a specific conflict is specified, but another conflict occurs instead, it will produce a conflict error like it would with a normal insert.
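So, hypothetically, if countries also had a second unique column (say, a country code, which the example table above doesn't actually have), a conflict on that other column would still raise a duplicate key error, because only (country) is listed in the ON CONFLICT clause:

-- Assumes a hypothetical unique "code" column
# INSERT INTO countries (country, code) VALUES ('Iceland', 'JP')
  ON CONFLICT (country) DO NOTHING;
-- fails with a duplicate key error on the code column's unique constraint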
pg_rewind
pg_rewind makes it possible to efficiently bring an old primary back in sync with a new primary without having to perform a full base backup. It works by looking in the Write Ahead Log to see which pages have been modified, and copying across only those pages.
In this example, we have a primary (running on port 5530) and a standby subscribing to it (on port 5531):
# SELECT * FROM pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
usename          | rep_user
application_name | standby1
client_addr      | 127.0.0.1
client_hostname  |
client_port      |
backend_start    |
backend_xmin     |
state            | streaming
sent_location    | 0/C81BB40
write_location   | 0/C81BB40
flush_location   | 0/C81BB40
replay_location  | 0/C81BB40
sync_priority    |
sync_state       |
Now we'll promote the standby:
$ pg_ctl promote -D standby1
server promoting
And we'll make some changes on this instance:
$ psql -p 5531 postgres
# CREATE TABLE x (content text);
CREATE TABLE
# INSERT INTO x SELECT 'test' FROM generate_series(1,1000);
INSERT 0 1000
Now we'll stop the old primary and use pg_rewind to re-synchronise it:
$ pg_ctl stop -D primary
waiting for server to shut down.... done
server stopped
$ pg_rewind -D primary --source-server='host=localhost port=5531' -P
connected to remote server
The servers diverged at WAL position 0/C81BB40 on timeline 1.
Rewinding from last common checkpoint at 0/2000060 on timeline 1
reading source file list
reading target file list
reading WAL in target
Need to copy 274 MB (total source directory size is 290 MB)
142 kB (100%) copied
creating backup label and updating control file
And we'll make some changes to get it to subscribe to the new primary:
$ cd primary
$ mv recovery.{done,conf}
$ vi recovery.conf
# edited to set host info to point to port 5531 in this case
$ vi postgresql.conf # as our example instances are running on the same server, we'll just change the port so it doesn't conflict
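For reference, a minimal recovery.conf for this setup might look like the following (the connection values are assumptions based on this example):

standby_mode = 'on'
primary_conninfo = 'host=localhost port=5531 user=rep_user application_name=standby1'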
Then start the new standby (old primary):
$ pg_ctl start -D primary
Let's see if it's successfully caught up:
$ psql -p 5531 postgres
# connect to the new primary
# SELECT * FROM pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
usename          | rep_user
application_name | standby1
client_addr      | 127.0.0.1
client_hostname  |
client_port      |
backend_start    |
backend_xmin     |
state            | streaming
sent_location    | 0/C8559B0
write_location   | 0/C8559B0
flush_location   | 0/C8559B0
replay_location  | 0/C855978
sync_priority    |
sync_state       |
And see if the test data from the new primary is on the new standby:
$ psql -p 5530 postgres
# connect to the new standby
# SELECT COUNT(*) FROM x;
 count
-------
  1000
All synchronised.
Other new features
ALTER TABLE ... SET LOGGED / UNLOGGED
PostgreSQL allows you to create tables which aren't written to the Write Ahead Log, meaning they aren't replicated or crash-safe, but also don't have the associated overhead, so they're good for data that doesn't need the guarantees of regular tables. However, if you decided an unlogged table should now be replicated, or a regular table should no longer be logged, you previously had to create a new copy of the table and copy the data across. In 9.5, you can switch between logged and unlogged with a new command:
Set an unlogged table to logged:
ALTER TABLE <tablename> SET LOGGED;
Set a logged table to unlogged:
ALTER TABLE <tablename> SET UNLOGGED;
For example:
# CREATE UNLOGGED TABLE messages (id int PRIMARY KEY, message text);
# SELECT relname,
CASE relpersistence
WHEN 'u' THEN 'unlogged'
WHEN 'p' then 'logged'
ELSE 'unknown' END AS table_type
FROM pg_class
WHERE relname ~ 'messages*';
    relname    | table_type
---------------+------------
 messages      | unlogged
 messages_pkey | unlogged
Note that setting an unlogged table to logged will generate WAL which will contain all data in the table, so this would cause a spike in replication traffic for large tables. And now we change it to a logged table:
# ALTER TABLE messages SET LOGGED;
And the result of the previous query is now:
    relname    | table_type
---------------+------------
 messages      | logged
 messages_pkey | logged
SKIP LOCKED
The row-locking clauses of SELECT (FOR UPDATE, FOR SHARE and friends) can now take SKIP LOCKED, which skips any rows that are already locked by another transaction instead of waiting for them (or erroring immediately, as NOWAIT does). This is particularly useful for queue-like tables where several workers compete for the same rows.
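A minimal sketch of how a queue worker might use it (the jobs table and its columns here are illustrative, not part of the examples above):

BEGIN;
-- Claim one unprocessed job, skipping any rows other workers have already locked
SELECT id, payload
FROM jobs
WHERE processed = false
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;
-- ... do the work, then mark the claimed row as done
UPDATE jobs SET processed = true WHERE id = <claimed id>;
COMMIT;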
Parallel VACUUMing
The vacuumdb utility now supports parallel jobs, specified with the -j option, just as with pg_dump or pg_restore. This means vacuuming a database will complete a lot quicker, especially where tables are spread across multiple tablespaces. It will also start vacuuming the largest relations first.
For example:
vacuumdb -j4 productiondb
This would vacuum the database named "productiondb" by spawning 4 vacuum jobs to run simultaneously.
Abbreviated Keys
The abbreviated keys optimization can be expected to greatly enhance the performance of sorts in PostgreSQL, including those used for CREATE INDEX. Reportedly, in some cases, CREATE INDEX on text columns can be as much as an entire order of magnitude faster. Numeric sorts also support the optimization.
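There is no new syntax to use; existing sorts simply get faster. For instance, an index build like the one below (table and column names are illustrative) sorts the column's text values internally and is the kind of operation that benefits:

-- Building a btree index sorts the column's values,
-- which now uses abbreviated keys under the covers
CREATE INDEX idx_customers_surname ON customers (surname);

-- A plain ORDER BY on a text column benefits in the same way
SELECT surname FROM customers ORDER BY surname;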
GiST Index-Only Scans
Previously, the only index access methods that supported index-only scans were B-Tree and SP-GiST, but PostgreSQL 9.5 adds support for GiST:
In this example, we'll be using the btree_gist extension:
# CREATE EXTENSION btree_gist;
We'll set up a simple table that stores meeting room reservations:
# CREATE TABLE meetings (
  id serial primary key,
  room int,
  reservation tstzrange);
Then we add an exclusion constraint to ensure that no booking for any one room overlaps another booking for the same room, which creates an index to enforce the constraint:
# ALTER TABLE meetings
ADD CONSTRAINT meeting_exclusion
EXCLUDE USING GIST (room with =, reservation with &&);
And we'll populate it with lots of test data:
# WITH RECURSIVE ins AS (
    SELECT
      1 AS room,
      '2014-01-01 08:00:00'::timestamptz AS reservation_start,
      (ceil(random()*24)*5 || ' minutes')::interval AS duration
  UNION ALL
    SELECT
      CASE
        WHEN ins.reservation_start > now() THEN ins.room + 1
        ELSE ins.room
      END AS room,
      CASE
        WHEN ins.reservation_start > now() THEN '2014-01-01 08:00:00'::timestamptz
        ELSE ins.reservation_start + ins.duration
      END AS reservation_start,
      (ceil(random()*16)*15 || ' minutes')::interval AS duration
    FROM ins
    WHERE reservation_start < now() + '1 day'::interval
    AND room <= 200
)
INSERT INTO meetings (room, reservation)
SELECT room, tstzrange(reservation_start, reservation_start + duration)
FROM ins
WHERE (reservation_start + duration)::time BETWEEN '08:00' AND '20:00';
One run of this results in 6.4 million rows.
If we get the query plan for counting how many meetings occurred during May in 2014 for each room:
# EXPLAIN SELECT room, count(*) FROM meetings WHERE reservation && '[2014-05-01,2014-05-31]'::tstzrange GROUP BY room ORDER BY room;
                                                  QUERY PLAN
-------------------------------------------------------------------------------------------------------------
 Sort  (cost=...94.21 rows=2 width=4)
   Sort Key: room
   ->  HashAggregate  (cost=...94.19 rows=2 width=4)
         Group Key: room
         ->  Index Only Scan using meeting_exclusion on meetings  (cost=0.41..1113.98 rows=36038 width=4)
               Index Cond: (reservation && '["2014-05-01 00:00:00+01","2014-05-31 00:00:00+01"]'::tstzrange)
Prior to 9.5, we would get the following plan:
                                                      QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Sort  (cost=...570.16 rows=2 width=4)
   Sort Key: room
   ->  HashAggregate  (cost=...570.14 rows=2 width=4)
         Group Key: room
         ->  Bitmap Heap Scan on meetings  (cost=778.49..28386.07 rows=36810 width=4)
               Recheck Cond: (reservation && '["2014-05-01 00:00:00+01","2014-05-31 00:00:00+01"]'::tstzrange)
               ->  Bitmap Index Scan on meeting_exclusion  (cost=0.00..769.29 rows=36810 width=0)
                     Index Cond: (reservation && '["2014-05-01 00:00:00+01","2014-05-31 00:00:00+01"]'::tstzrange)