Amazon RDS now supports PostgreSQL 9.6.1 (postgresql.org)
I'm fairly impressed by RDS's turnaround time. 9.6 dropped in late September and they're already supporting it just over a month later. I wish Google Cloud SQL supported Postgres at all. It's sad that there's limited competition in the managed Postgres space (the most notable competitor being Heroku).
Yeah Google Cloud SQL's lack of Postgres support really stings. Would be great to see them branch out a bit.
I suppose they have more experience with MySQL, given that YouTube runs on it...
It looks pretty clear that Postgres support in some form is coming from Google - https://code.google.com/p/googlecloudsql/issues/detail?id=96
Umm... the only signal on that issue is that it got changed to "accepted" recently without any comment about ETA. I've seen issues sit in that state for quite some time, so that doesn't look all that encouraging or clear to me.
That said, thanks for the link; I've voted for it and will keep an eye out there for future updates.
Meanwhile, SQL Server on RDS doesn't support increasing the data allocation. You can't even take a snapshot and restore to a new, larger, RDS instance. Oh and by the way, SQL Server has been "supported" since 2012.
Obviously it's not their focus, but this is a fundamental selling point of cloud DBs.
That's pretty lame. Do you happen to know whether you can create an AAG with a larger volume on the replica and cut it over? I don't even remember if that's possible with your own SQL Server AAG, it's been a while. At any rate, thanks for commenting about this. I'm going to need to look into this at some point in the near future, so your timing was fortuitous.
We had this issue. Luckily they recently added native backups to S3, so we had 3 hours of downtime to back up a 180 GB DB and restore to a new 500 GB instance.
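For anyone hitting the same wall, this is roughly what the native-backup route looks like when scripted. A rough sketch assuming pyodbc; the procedure names (msdb.dbo.rds_backup_database / rds_restore_database) and parameters are from memory, so check the RDS native backup/restore docs, and the connection strings and bucket ARN are placeholders:

    # Back up a SQL Server RDS database to S3, then restore it on a new, larger instance.
    # Procedure names/parameters are assumptions -- see the RDS native backup/restore docs.
    import pyodbc

    OLD_DSN = "DRIVER={ODBC Driver 13 for SQL Server};SERVER=old-instance.rds.amazonaws.com;UID=admin;PWD=..."
    NEW_DSN = "DRIVER={ODBC Driver 13 for SQL Server};SERVER=new-instance.rds.amazonaws.com;UID=admin;PWD=..."
    BACKUP_ARN = "arn:aws:s3:::my-backup-bucket/mydb.bak"  # placeholder bucket/key

    def run(conn_str, sql, *params):
        # Execute a single statement with autocommit (required for these procs).
        conn = pyodbc.connect(conn_str, autocommit=True)
        try:
            conn.cursor().execute(sql, *params)
        finally:
            conn.close()

    # 1. Kick off the backup on the old instance...
    run(OLD_DSN,
        "exec msdb.dbo.rds_backup_database @source_db_name = ?, @s3_arn_to_backup_to = ?",
        "mydb", BACKUP_ARN)
    # 2. ...then restore on the new, larger instance.
    run(NEW_DSN,
        "exec msdb.dbo.rds_restore_database @restore_db_name = ?, @s3_arn_to_restore_from = ?",
        "mydb", BACKUP_ARN)
    # Progress can be polled with msdb.dbo.rds_task_status on either instance.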
Jesus. I'm no strategy genius but sometimes it seems like they want to cede the Windows market to Azure. Some of the Windows stuff in AWS seems kind of "me too".
Is the Windows Server market worth fighting for? MS makes the most profit no matter who runs the hardware.
That's a fair point. And if it really came down to it they could (and arguably do with things like SQL server) make the licensing terms less agreeable for other cloud operators.
In short, I don't know. However, I highly doubt it. Most of these features/permissions are disabled when you use RDS. For example you can't take a snapshot through SSMS, you have to use the AWS console.
I'm assuming MS SQL? Isn't it much cheaper to run it on an instance yourself if you have company licenses? And I suspect people who don't already have licenses run Postgres or MySQL.
Yep, MS SQL Server. I think that's something you have to determine on a case-by-case basis. If you have Software Assurance you can run it on any instance but if you don't you have to pay Amazon for dedicated hardware. As usual Microsoft's enterprise licensing is pretty much shit. You might also incur more operations overhead than you can afford by having to deal with patching, outages, performance degradation, etc. And your management might want to engage in a slide deck circle jerk with animations about how modern and in the cloud you are by using RDS so that's worth throwing a few $30K+SA enterprise proc licenses away.
Boy do I wish they had the same turnaround time on AWS Lambda Python versions. Still stuck on 2.7 :(
At least you have a supported language! (Ruby dev)
I managed to run mruby on Lambda. You build the executable on the same Linux distro that Lambda uses and upload it in a zip, then use Node or Python to run it. More of a gimmick than a real thing. Anyway, IMHO, given the 100 ms billing units, the future for Lambda could be compiled languages, as soon as they let us run executables without a Node or Python wrapper. Maybe the tradeoff between programmer time and runtime costs will turn out different than for traditional deployments.
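The wrapper itself is tiny: a thin handler that shells out to the bundled binary. A rough sketch that works on the Python 2.7 runtime ("mruby-handler" is a made-up name for whatever executable you zipped up at the root of the deployment package):

    # Thin Python shim that runs a bundled native binary inside Lambda.
    # "./mruby-handler" is a placeholder for your compiled executable.
    import json
    import subprocess

    def handler(event, context):
        # Pass the event to the binary on stdin and return whatever it prints.
        proc = subprocess.Popen(
            ["./mruby-handler"],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
        )
        out, _ = proc.communicate(json.dumps(event))
        if proc.returncode != 0:
            raise RuntimeError("mruby-handler exited with %d" % proc.returncode)
        return json.loads(out)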
There are many options for hosted PostgreSQL.
See for example this staggering list of hosting providers in Europe: https://www.postgresql.org/support/professional_hosting/euro...
Yes, most of those are smaller companies. But some of these companies are directly involved in the development of PostgreSQL (just look at the PostgreSQL-hackers mailing list), so they should really know what they are doing!
Do people really talk to databases over WAN (which means relatively high latency)? I was under the impression this was a bad idea.
Many of the hosting providers provide PG as a service in multiple clouds / hosting providers allowing you to run your applications on your own VMs in the same cloud. So while you would be using the external IP addresses to communicate with the database it'd still be in the same datacenter.
Also, as pointed out in the parent post, some of the hosting providers such as us (Aiven, https://aiven.io) are directly involved in PostgreSQL development and one of the bugfixes in the latest releases (9.6.1 and 9.5.5) that just came out was contributed by us.
Running within the same cloud provider makes sense, I hadn't thought of that.
It depends on the type of application and the requirements and expectations of its users. I wouldn't recommend having a web app talk to its database over a WAN, for example. Unless it had a decent caching layer in between.
That's a great point. I've heard of some smaller companies (Compose, ElephantSQL) but to be honest, I have definitely not considered them. The reason is that I just want an all-inclusive solution and I don't have much context for these companies: how healthy they are, the likelihood of them going out of business, etc.
I think you bring up a good point. I'm going to put aside my bias and give a smaller company a shot. Thanks for sharing.
Compose.io is owned by IBM. I had a chance to meet both the Compose team and a few people from IBM at the DataLayer conference. They are passionate and smart.
Compose is in a sweet spot right now. They run with the speed of a startup, but have access to IBM's massive resources. Really the best of both worlds.
Compose (an IBM company) does managed Postgres.
There are multiple DB-as-a-Service services that provide PostgreSQL in Google Cloud. My company, Aiven (https://aiven.io) is one of the providers and I believe DatabaseLabs and ElephantSQL also provide managed PG in Google Cloud.
My company, Database Labs, runs Postgres as a service in Google Cloud -- https://www.databaselabs.io/pricing/#google-cloud
I wrote about why I thought Citus was the best a week ago: http://blog.faraday.io/3-reasons-citus-is-best-non-heroku-po...
Heroku isn't a competitor to AWS, AFAIK. They're a partner.
In this context it's useful to note that RDS instance pricing is ~50% higher than the pricing for the underlying EC2 instances. That 50% reflects the (pure software) value-add over EC2.
When someone like Heroku offers a competing product, the fact that the product is also being run on EC2 is only a part of the overall story. Over time, AWS will probably be less and less happy with earning just the bare infrastructure dollars for such use.
> In this context it's useful to note that RDS instance pricing is ~50% higher than the pricing for the underlying EC2 instances. That 50% reflects the (pure software) value-add over EC2.
Just build an HA RDS cluster yourself with click-and-scale, then you can sell it to me at ~25% higher prices than the EC2 instances. BTW, AWS RDS is cheaper than most other competitors like ElephantSQL.
Just stating a fact about their pricing (in the context of whether Heroku is a competitor or not). Not judging the relative value of the offerings at all. I'm an RDS user, FWIW.
Sorry, I'm not sure what you mean. Heroku is (or at least used to be) built on EC2. They also offered a managed database (Heroku Postgres) built on EC2. Do you mean they're a partner because they're building a product on top of EC2?
And is it HIPAA certified yet? Because the lack of that is what made me spend a week porting a project to use RDS/MySQL just a little while ago. We've been told it's coming Real Soon Now since last year.
I wanted to use PostgreSQL for a whole host of reasons, but not so much that we wanted to certify our own instances.
Another option is to use Aptible: https://www.aptible.com/
It's basically (a better) Heroku with an emphasis on enabling HIPAA/HITECH compliance. They do both app and DB hosting (including Postgres). And it's on AWS, so it integrates easily with existing infrastructure/code.
Disclaimer: Biased as I know the founders and we use the product. But they are good people and it's a good product!
But Aptible doesn't have managed Postgres. I still have to do all the work of setting up a database and managing backups, upgrades, etc. The only things Aptible brings to the table are single-tenant hardware and encryption by default. I can just get that from Amazon. Am I missing something about Aptible? I don't understand the attraction, although it does look fancy.
Healthcare Blocks is another option: https://www.healthcareblocks.com
Aptible absolutely has managed postgres, and it's great. I'm using it at this very second.
Where can I find out more? All I see on their marketing site is "database containers", which I interpreted as little more than a container that happens to run a database, not as something that manages backups, point-in-time restoration, multi-AZ failover, upgrades, and all the other things that RDS has built in that many folks would otherwise prefer to have an (expensive) Postgres specialist set up and manage for them.
Dang, sorry about that: https://www.aptible.com/enclave/
That page needs some love, thank you for the solid comment. I think the features we list cover your comments, but let me know if you have specific questions or other features you'd like to see!
Source: am Aptible CEO
Although dirty, you could still technically use Postgres, I believe; you would just need to run it on EC2 and EBS instead. The covered services for the BAA include EC2 and EBS, so you would still be covered under the signed BAA with Amazon. When Postgres RDS is eventually covered under the BAA, you could port over to it.
Does anyone else get abysmal I/O performance on Postgres RDS? I have a 200 GB provisioned SSD with 2,000 IOPS and get abysmal bulk read performance - the panel will report 30 MB/s and query speed is really slow, with disk being the bottleneck.
I'm using RDS MySQL and have never been able to utilize more than ~25% of the provisioned IOPS or more than 100MB/s transfer rate. Also, as best as I can tell, EBS latency is pretty high so making lots of small, serial database calls is a lot slower than it would be on local disks.
Any indexes on the table? Are you using COPY, INSERT INTO, or \copy via psql?
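To illustrate what I mean by COPY: for bulk loads it usually beats row-by-row INSERTs by a wide margin. A rough psycopg2 sketch (connection string, table, and file are made up):

    # Bulk-load a CSV into Postgres with COPY instead of per-row INSERTs.
    # Connection string, table, and file names are placeholders.
    import psycopg2

    conn = psycopg2.connect("host=mydb.rds.amazonaws.com dbname=app user=app password=...")
    with conn, conn.cursor() as cur, open("events.csv") as f:
        # Streams the file server-side in one statement; commits on exiting the block.
        cur.copy_expert("COPY events (id, ts, payload) FROM STDIN WITH (FORMAT csv)", f)
    conn.close()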
IIRC there are various tricks you used to have to perform to get it to stripe across different disks in the backend. Something along the lines of allocating 20% of your storage and then increasing it 5 times until you had what you wanted. If you provisioned a server immediately to full capacity you'd end up with your data too close together, which degraded performance.
That's something I heard that used to be true but not sure whether it's still required with upgrades to S3 over the years...
Interesting, I've never heard of striping tricks like that. Do you have any links on that? And I suppose that could be applicable to EBS since it rather than S3 is the backing store for RDS.
Sorry I don't have links. But yes, I think it was primarily to improve the performance of EBS, not S3 (been a while since I had to work on that sort of thing).
Now they just need to support the Foreign Data Wrapper extension with egress connections. Really wish that was built in because FDW itself is amazing, and AWS RDS is easy to manage.
Agree 100%. Would be awesome if you could have RDS with FDW connected to a Redshift instance. (don't know if that's possible given the version of Postgres upon which Redshift is based)
It is definitely doable. You can refer to this blog post by AWS (https://aws.amazon.com/blogs/big-data/join-amazon-redshift-a...) to set up FDW to Redshift.
What is more exciting is that you can leverage Redshift's MPP architecture with this method.
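To give a flavor of the setup, it's mostly postgres_fdw DDL pointed at the Redshift endpoint; the blog post above has the authoritative steps. A rough sketch run from a sufficiently privileged session (endpoint, credentials, and the table layout are placeholders):

    # Point postgres_fdw at a Redshift cluster so you can query it from Postgres.
    # Host, credentials, and the table definition below are made-up placeholders.
    import psycopg2

    ddl = """
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER redshift_server
      FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (host 'my-cluster.redshift.amazonaws.com', port '5439', dbname 'analytics');

    CREATE USER MAPPING FOR CURRENT_USER
      SERVER redshift_server
      OPTIONS (user 'analytics_ro', password 'secret');

    CREATE FOREIGN TABLE redshift_events (
      event_id   bigint,
      created_at timestamptz,
      payload    text
    ) SERVER redshift_server OPTIONS (schema_name 'public', table_name 'events');
    """

    conn = psycopg2.connect("host=mydb.rds.amazonaws.com dbname=app user=app password=...")
    with conn, conn.cursor() as cur:
        cur.execute(ddl)
    conn.close()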
You can definitely do FDWs to other Postgres instances, and since Redshift is Postgres-based, I think you can just point the same wrapper at it and voila.
This happened a couple days ago, noticed when spinning up a new RDS instance.
I'm super excited about it. It's great to have more modern managed services on AWS especially since this brings Postgres out of the Stone Age. Lots of good JSON support added in 9.5 and 9.6.
Previously only 9.4 was available.
9.5 was actually added to RDS back in April [0]. There were some serious bugs found that made them wait for 9.5.2 before adding it [1]. As a customer of RDS, I'm really happy that they're taking their time to thoroughly test the releases.
[0] https://aws.amazon.com/about-aws/whats-new/2016/04/rds-postg... [1] https://forums.aws.amazon.com/thread.jspa?threadID=240278
Yeah they only added 9.5 a couple weeks ago, very recently.
That's simply not true. We've been running PostgreSQL 9.5 since May.
Apparently it was added in April: https://aws.amazon.com/about-aws/whats-new/2016/04/rds-postg...
I think he meant 9.6 which would make more sense.
I upgraded - took <10 mins, and I also upgraded instance types. One slow query (that parallelizes) is 3x faster with no SQL changes or new indices.
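That speedup is most likely 9.6's new parallel query. If anyone wants to verify a query is actually going parallel, here's a rough sketch (connection details and table are made up) that shows the plan with parallel workers enabled:

    # Check whether a slow query gets a parallel plan on 9.6.
    # Table and connection details are placeholders.
    import psycopg2

    conn = psycopg2.connect("host=mydb.rds.amazonaws.com dbname=app user=app password=...")
    with conn, conn.cursor() as cur:
        # On a default 9.6 config this is 0, i.e. parallel query off; bump it for the session.
        cur.execute("SET max_parallel_workers_per_gather = 4")
        cur.execute("EXPLAIN (ANALYZE) SELECT count(*) FROM big_table WHERE payload LIKE '%foo%'")
        for (line,) in cur.fetchall():
            print(line)  # look for "Gather" and "Workers Launched" nodes
    conn.close()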
On a slightly tangential topic, what is Amazon doing with Aurora?
Aurora supports MySQL so far, heard lots of good things about it. Hopefully they'll put Postgres on it soon.
So far good things IMO. Released a new version recently, the performance has been outstanding.
What's the performance of Amazon RDS like with Postgres? (Curious, never having used it).
For small to medium volume apps it's quite good. The security configuration out of the box is quite good. All the standard options, like changing the pgsql port or opening it up to only your main DB instance, are supported.
It needs a once-a-week downtime window of half an hour for patches. But if you use a multi-AZ deployment with 2 instances it does this without any actual downtime, automatically managing the failover for you.
Provisioned IOPS are very expensive. For small loads use a bigger general purpose SSD, say 100 to 200 GB, and it works OK. The IOPS are burstable so it works out OK.
I don't know about RDS Postgres, but RDS MySQL multi-AZ does have up to 2 minutes of downtime when switching masters. We see that when scaling up our staging environment for load testing.
I agree on the IOPS - just get a larger disk (min 100 GB) even if you'll never use it. Storage is cheap. It's supposed to give you 3K IOPS but doesn't actually give you anywhere near that (it is still fast, though). If you do need guaranteed IOPS, then it's wallet-opening time.
I tested 9.4 ages ago by polling a table, then upgraded the instance type while in multi-AZ. The downtime was about ~14 seconds.
With SQL Server it was about ~50 seconds.
2 minutes seems like a long time, but I'm curious now and want to test this again and maybe blog about it.
I haven't tried for a while - my method for outage detection was simply trying to make a connection. It wasn't always 2 min, but usually somewhere between 1-2 min (never more, though).
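If anyone wants to reproduce the measurement, a dumb connection-polling loop is enough to time the failover gap. A rough sketch with psycopg2 (connection details are placeholders); run it while triggering a reboot with failover or an instance resize:

    # Measure multi-AZ failover downtime by repeatedly connecting to the endpoint.
    # Connection details are placeholders; run this while triggering a failover.
    import time
    import psycopg2

    DSN = "host=mydb.rds.amazonaws.com dbname=app user=app password=... connect_timeout=2"

    outage_started = None
    while True:
        try:
            conn = psycopg2.connect(DSN)
            conn.cursor().execute("SELECT 1")
            conn.close()
            if outage_started is not None:
                print("outage lasted %.1f seconds" % (time.time() - outage_started))
                outage_started = None
        except psycopg2.OperationalError:
            if outage_started is None:
                outage_started = time.time()
        time.sleep(1)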
> It needs a once-a-week downtime window of half an hour for patches.
Half an hour of downtime once a week? This sounds bad.
I checked the FAQ once again: https://aws.amazon.com/rds/faqs/
Here are the exact words from Amazon:
> The Amazon RDS maintenance window is your opportunity to control when DB Instance modifications (such as scaling DB Instance class) and software patching occur, in the event they are requested or required. If a maintenance event is scheduled for a given week, it will be initiated and completed at some point during the maintenance window you identify. Maintenance windows are 30 minutes in duration. The only maintenance events that require Amazon RDS to take your DB Instance offline are scale compute operations (which generally take only a few minutes from start-to-finish) or required software patching. Required patching is automatically scheduled only for patches that are security and durability related. Such patching occurs infrequently (typically once every few months) and should seldom require more than a fraction of your maintenance window. If you do not specify a preferred weekly maintenance window when creating your DB Instance, a 30 minute default value is assigned. If you wish to modify when maintenance is performed on your behalf, you can do so by modifying your DB Instance in the AWS Management Console, the ModifyDBInstance API or the modify-db-instance command. Each of your DB Instances can have different preferred maintenance windows, if you so choose.
> Running your DB Instance as a Multi-AZ deployment can further reduce the impact of a maintenance event. Please refer to the Amazon RDS User Guide for more information on maintenance operations.
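For what it's worth, the ModifyDBInstance API they mention is easy to drive from a script if you want to move the window somewhere harmless. A rough boto3 sketch (the instance identifier and window are placeholders):

    # Move the weekly 30-minute maintenance window to early Sunday morning UTC.
    # The instance identifier and window below are placeholders.
    import boto3

    rds = boto3.client("rds")
    rds.modify_db_instance(
        DBInstanceIdentifier="my-postgres-instance",
        PreferredMaintenanceWindow="sun:08:00-sun:08:30",  # ddd:hh24:mi-ddd:hh24:mi, UTC
        ApplyImmediately=True,
    )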
They have a window for 30 minutes per week, but in practice it is used less than once a year.
You max out at 30,000 IOPS, so about 1/3 of a consumer grade SSD or 1/10 of a decent PCIe SSD.
It's frustrating that AWS doesn't offer something ideal for databases. Databases should be run with fast local storage, on reliable dedicated hardware. VM instances on shared hardware that may be retired at short notice aren't ideal.
We (databaselabs.io) did some customer dev tests for that exact product (databases with fast local hardware storage.) We found that almost all customers didn't care about getting that level of performance (enough to pay for it, anyway.) For ~99%, cloud-level shared storage is all they'll pay for. Finding the other 1% is a very expensive long-cycle sales and marketing proposition, so we decided to market cloud solutions to the 99%.
That said, we are equipped to run the 'fast dedicated hardware' solution (we come from a managed services background) -- just write me, pjlegato at databaselabs.io, and I'll get you set up.
X- and I-series instances.
It's basically what you get from a well configured EC2 instance.
You pay for the underlying instances + storage provisioning, so performance is price-configurable.
It's fine if you can live with not doing any DBA stuff.
Out of interest, do Amazon send anything upstream to the pg team?
How does this compare to Heroku PostgreSQL?
It seems to be cheaper, but it's hard to compare without knowing what the Heroku instances are.
Are there any other companies out there that provide hosting with multi-regional failover? I love RDS and it's fine for mission critical stuff, but sometimes I just don't want to fork over so much money on smaller projects and need an alternative.
Anyone using Amazon RDS in production and satisfied?
I wish Azure would launch hosted Postgres.
Does this indicate whether it will include a REST API to build some quick applications off of, or is it easier to just return JSON from a query?
I don't think so, but you might look at https://www.postgresql.org/about/news/1616/
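On the JSON side of the question, Postgres can hand back JSON directly via row_to_json/json_agg, so you may not need much of an API layer at all; and for an instant REST API there are projects like PostgREST that sit directly on top of a Postgres schema. A rough sketch of the query-to-JSON route with psycopg2 (connection string and table are made up):

    # Return a whole result set as a single JSON document straight from Postgres.
    # Connection string and table are placeholders.
    import psycopg2

    conn = psycopg2.connect("host=mydb.rds.amazonaws.com dbname=app user=app password=...")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT json_agg(row_to_json(t)) FROM (SELECT id, name, created_at FROM users) t")
        (payload,) = cur.fetchone()
    conn.close()
    print(payload)  # e.g. [{"id": 1, "name": "ada", "created_at": "..."}]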