
AWS Quarterly and Cloud Database Updates

AWS has been plenty busy lately. They just hosted their AWS Summit in Santa Clara, and launched a webinar series covering numerous product updates – particularly on AWS cloud database products – as part of the AWS Quarterly Update.

As an AWS Premier Consulting Partner, we stay up to date on all things AWS. I’d like to share a few of the highlights from their quarterly update, and look ahead at the rest of the year.

For starters, what exactly is the AWS Quarterly Update? From AWS itself:

“With a high pace of innovation and features and services launched daily, it can be hard to keep up with what’s new from AWS. The AWS Quarterly Update is a new way to catch up and review the biggest updates from the past quarter. You will have the opportunity to ask Jeff Barr questions during the live Q&A session following the presentation.”

With that in mind, there were a lot of great updates from Q2. The AWS Database Migration Service now supports additional database platforms. The new AWS Schema Conversion Tool saves development time when migrating Oracle and SQL Server database schemas to their MySQL, MariaDB, and Aurora equivalents – including PL/SQL and T-SQL procedural code, which spares teams a great deal of manual conversion work. And AWS users only pay for the compute resources used during the migration process, plus any additional log storage.
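
For teams scripting their migrations, DMS tasks can also be driven through the API. Here's a minimal sketch using boto3; the endpoint and replication instance ARNs are hypothetical placeholders for resources you'd have created beforehand:

```python
import json
import boto3

dms = boto3.client("dms")

# Select every table in every schema; the DMS table-mapping docs
# cover finer-grained filters.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# All ARNs below are hypothetical placeholders.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-demo",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INST",
    MigrationType="full-load",
    TableMappings=json.dumps(table_mappings),
)

# Start the task once it reports status "ready".
dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```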

Additionally, Amazon Redshift is seeing a new burst of speed: throughput improves up to 2x thanks to better memory management, queries that stress the network run faster, and backups complete more quickly. UNION ALL queries and VACUUM commands now run up to 10x faster. On top of all that, open source admin scripts, admin views, and column encoding utilities are available on GitHub.
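
Taking advantage of the faster VACUUM requires nothing new on the client side; it's an ordinary SQL command against the cluster. A quick sketch with psycopg2, where the connection details and table name are hypothetical:

```python
import psycopg2

# Connection details are placeholders; Redshift listens on 5439 by default.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="admin", password="REPLACE_ME",
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    cur.execute("VACUUM sales;")   # reclaim space and re-sort rows
    cur.execute("ANALYZE sales;")  # refresh planner statistics afterwards
```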

Redshift also now offers the ability to create new data sources in Amazon Machine Learning. AWS covered Redshift during the Santa Clara Summit, including how it uses columnar data storage, keeping data in columns instead of rows. That's ideal for data warehousing and analytics: since only the columns involved in a query are processed, and columnar data is stored sequentially on the storage media, these systems require far fewer I/Os, which greatly improves query performance.
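
To make the I/O argument concrete, here's a toy illustration in plain Python of why a column store reads less data for an analytic query. This is a conceptual sketch only, not how Redshift is implemented:

```python
# Row-oriented layout: an aggregate must touch every field of every row,
# even though it only needs one column.
rows = [
    {"order_id": 1, "customer": "alice", "amount": 10.0},
    {"order_id": 2, "customer": "bob",   "amount": 12.5},
]
total_row = sum(row["amount"] for row in rows)

# Column-oriented layout: each column is stored (and read) contiguously,
# so the same aggregate touches only the blocks holding that one column.
columns = {
    "order_id": [1, 2],
    "customer": ["alice", "bob"],
    "amount":   [10.0, 12.5],
}
total_col = sum(columns["amount"])

assert total_row == total_col == 22.5
```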

Of course, Amazon Aurora played a big part during the AWS Summit too. We saw a new view of Aurora clusters on the RDS console, and watched Amazon review several SQL benchmark test results while discussing best practices. Aurora lets users share DB snapshots across accounts and offers cross-region replication. A snapshot can be shared with specific accounts or made public, and the receiving account can restore it to an Aurora instance running in the same Region as the snapshot, then work with the restored cluster through the console. With cross-region replication, users create cross-region read replicas for Amazon Aurora – useful not just in development, but also for disaster recovery. It's always a good idea to have strong DR capabilities, and with Aurora, that's entirely possible.
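
Snapshot sharing comes down to a single API call. A minimal boto3 sketch, with a hypothetical snapshot identifier and account ID:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Grant another AWS account permission to restore this cluster snapshot.
# The snapshot identifier and account ID are hypothetical placeholders;
# use ValuesToAdd=["all"] to make the snapshot public instead.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="my-aurora-snapshot",
    AttributeName="restore",
    ValuesToAdd=["111122223333"],
)
```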

Amazon also recently released a new white paper, Getting Started with Aurora, to help clients understand the benefits and walk through the steps required to create and connect to an Aurora database. Keeping that momentum, Amazon announced on its blog this week a new migration feature: customers can take a snapshot backup of an existing MySQL database, upload it to Amazon S3, and create an Amazon Aurora cluster from it – a migration path that keeps applications up and running throughout.
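
The S3-based migration is exposed through the RDS API as well. A hedged sketch with boto3 – every identifier below is a hypothetical placeholder, and the backup in S3 is assumed to have been taken with a physical backup tool such as Percona XtraBackup:

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora cluster directly from a MySQL backup staged in S3.
# All names, versions, and ARNs are placeholders.
rds.restore_db_cluster_from_s3(
    DBClusterIdentifier="my-aurora-cluster",
    Engine="aurora",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    SourceEngine="mysql",
    SourceEngineVersion="5.6.22",
    S3BucketName="my-mysql-backups",
    S3Prefix="backup-2016-06",
    S3IngestionRoleArn="arn:aws:iam::111122223333:role/aurora-s3-access",
)
```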

While databases were the name of the game at the AWS Summit in Santa Clara, there were other big announcements, too, as AWS expanded its footprint for enterprise clients. Here are a few non-DB highlights:

  • Amazon S3 Transfer Acceleration – Using a highly optimized network bridge, with no gateway servers, firewalls, special protocols/clients, or upfront fees required, upload speeds across regions are up to 500 percent faster in some cases. This is a simple, efficient way to quickly move on-premises data to the cloud (see the sketch after this list).
  • AWS Import/Export Snowball – Users can now export large amounts of data from AWS, using one or more S3 buckets.
  • Amazon Route 53 added metric-based health checks, DNS failover for private hosted zones, and configurable health check locations.
  • Amazon CloudWatch Events are now supported in AWS CloudFormation templates.
  • Amazon EC2 Container Service (ECS) supports automatic service scaling, and the Amazon EC2 Run Command adds document sharing.
  • X1 instances for Amazon EC2 have the most memory of any SAP-certified cloud instance, with 10 Gbps of dedicated bandwidth, making them ideal for running in-memory databases, big data processing engines, and high-performance computing workloads.
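
On that first point, Transfer Acceleration is enabled per bucket and then used by routing clients through the accelerated endpoint. A minimal boto3 sketch, where the bucket and file names are hypothetical:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time setup: enable acceleration on the bucket (name is a placeholder).
s3.put_bucket_accelerate_configuration(
    Bucket="my-upload-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Subsequent transfers go through the accelerated endpoint.
accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accel.upload_file("backup.tar.gz", "my-upload-bucket", "backups/backup.tar.gz")
```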

The first half of the year was exciting for AWS, and I have no doubts there will be even more great products and services down the line. We’re looking forward to continuing to support AWS as a Premier Consulting Partner. To attend a future AWS Summit, check out their Summits page.

About David Lucky

As Datapipe’s Director of Product Management, David has unique insight into the latest product developments for private, public, and hybrid cloud platforms and a keen understanding of industry trends and their impact on business development. David writes about a wide variety of topics including security and compliance, AWS, Microsoft, and business strategy.
