At AWS re:Invent, Amazon Web Services, Inc. (AWS) announced three new serverless
innovations across its database and analytics portfolio that make it
faster and easier for customers to scale their data infrastructure to
support their most demanding use cases. Today's announcement introduces
Amazon Aurora Limitless Database, a new capability that automatically
scales beyond the write limits of a single Amazon Aurora database,
making it easy for developers to scale their applications and saving
them months compared to building custom solutions. Additionally, Amazon
ElastiCache Serverless helps customers create highly available caches in
under a minute and instantly scales vertically and horizontally to
support customers' most demanding applications, without needing to
manage the infrastructure. AWS is also releasing a new Amazon Redshift
Serverless capability that uses artificial intelligence (AI) to predict
workloads and automatically scale and optimize resources to help
customers meet their price-performance targets. These announcements
build on AWS's pioneering work with serverless technologies to help
customers manage data at any scale and dramatically simplify their
operations, so they can focus on innovating for their end users, without
spending time and effort provisioning, managing, and scaling their data
infrastructure. To learn more about unlocking the value of data using
AWS, visit
aws.amazon.com/data.
"Since its earliest days, AWS has focused on removing undifferentiated
heavy lifting for customers, and we have continued to build on that
legacy through serverless offerings that dramatically simplify what it
takes to build, run, and manage applications at scale," said Dr. Swami
Sivasubramanian, vice president of Data and Artificial Intelligence at
AWS. "Data is the cornerstone of every organization's digital
transformation, and harnessing data to its full potential requires an
end-to-end strategy that can scale with a customer's needs while
accommodating all types of use cases. The dynamic nature of data makes
it perfectly suited to serverless technologies, which is why AWS offers a
broad range of serverless database and analytics offerings that help
support our customers' most demanding workloads. The new serverless
innovations announced today build on this foundation to make it easier
for customers to scale to millions of transactions per second, quickly
add capacity at a moment's notice, and dynamically adapt to workload
patterns to optimize for performance and cost."
Organizations create and store petabytes of data from a growing number
of sources. To get the most value out of this data, these companies need
an end-to-end strategy that can help them analyze and manage the data
at any scale. Many AWS customers are already using a wide variety of
purpose-built data services to support their most critical applications
and make data-driven decisions, including Amazon Aurora for relational
databases, Amazon ElastiCache for running in-memory caches, and Amazon
Redshift for data warehousing. These services remove much of the heavy
lifting customers would otherwise take on to run their own database
and analytics solutions, allowing them to focus on creating
differentiated experiences for their end users. AWS continues to
simplify operations for customers by releasing serverless technologies
across its service portfolio, from some of AWS's earliest offerings like
Amazon Simple Storage Service (Amazon S3) to pioneering serverless,
event-driven computing with AWS Lambda. Today, AWS offers the broadest
set of serverless database and analytics offerings in the cloud, making it easy
for customers to take advantage of benefits like automatic provisioning,
on-demand scaling, and pay-for-use pricing while using the right tool
for the job. The new innovations announced today further AWS's
commitment to reimagining its database and analytics portfolio through
serverless technologies, by making it even easier for customers to
optimize costs and maximize their data's value.
Amazon Aurora Limitless Database powers petabyte-scale applications with millions of writes per second
Today, hundreds of thousands of customers use Amazon Aurora, a fully
managed MySQL- and PostgreSQL-compatible relational database that
provides the performance and availability of commercial databases at up
to one-tenth the cost. These organizations rely on Amazon Aurora
Serverless v2 to power their applications because it is capable of
scaling to support hundreds of thousands of transactions in a fraction
of a second. As it scales, it adjusts capacity up and down in
fine-grained increments to provide the right amount of database
resources for the application. However, there are some use cases, such
as online gaming and financial transaction processing, with workloads
that need to process and manage hundreds of millions of global users,
handle millions of transactions, and store petabytes of data. Today,
these organizations must scale horizontally by splitting data into
smaller subsets and distributing them across multiple distinct database
instances in a process known as "sharding," which requires months, or
even years, of upfront developer effort to build custom software that
routes requests to the correct instance or makes changes across multiple
instances. Organizations also need to continuously monitor database
activity and adjust capacity, which can be time-consuming and impact
availability. The ongoing maintenance effort for these workloads is
high, as organizations need to coordinate routine maintenance
operations (such as adding a column to a table, taking consistent backups
across all compute instances, or applying upgrades and patches) and
constantly tune and balance the load across multiple instances. As a
result, organizations need ways to automatically scale their
applications beyond the limits of a single database without spending
time building their own scaling solutions.
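To illustrate the kind of custom routing software described above, the following is a minimal, hypothetical Python sketch of an application-level sharding layer. The endpoints, shard key, and table are invented for illustration only; Amazon Aurora Limitless Database is intended to make code like this unnecessary by distributing data and routing requests within a single database.

    # Conceptual sketch of a hand-rolled shard-routing layer (hypothetical
    # endpoints, shard key, and table). Not Aurora code: this is the custom
    # software Aurora Limitless Database is designed to eliminate.
    import hashlib

    import psycopg2  # standard PostgreSQL driver; Aurora is PostgreSQL-compatible

    SHARD_ENDPOINTS = [
        "shard-0.example.us-east-1.rds.amazonaws.com",
        "shard-1.example.us-east-1.rds.amazonaws.com",
        "shard-2.example.us-east-1.rds.amazonaws.com",
    ]

    def shard_for(customer_id: str) -> str:
        """Hash the shard key to pick which database instance owns this row."""
        digest = hashlib.sha256(customer_id.encode()).hexdigest()
        return SHARD_ENDPOINTS[int(digest, 16) % len(SHARD_ENDPOINTS)]

    def record_purchase(customer_id: str, amount: int) -> None:
        """Route the write to the owning shard. Schema changes, consistent
        backups, and rebalancing across shards still have to be coordinated
        by hand across every endpoint in SHARD_ENDPOINTS."""
        conn = psycopg2.connect(host=shard_for(customer_id), dbname="app",
                                user="app_user", password="...")
        try:
            with conn, conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO purchases (customer_id, amount) VALUES (%s, %s)",
                    (customer_id, amount),
                )
        finally:
            conn.close()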
Amazon Aurora Limitless Database scales to millions of write
transactions per second and manages petabytes of data while maintaining
the simplicity of operating inside a single database. Amazon Aurora
Limitless Database automatically distributes data and queries across
multiple Amazon Aurora Serverless instances based on a customer's data
model, eliminating the need to build custom software to route requests
across instances. As compute or storage requirements increase, Amazon
Aurora Limitless Database automatically scales resources vertically
within serverless instances and horizontally across instances to meet
workload demand, providing customers with consistently high performance
while saving them months or years of effort in building custom software
to scale their databases. Maintenance operations and changes can be made
in a single database and automatically applied across instances,
eliminating the need to manually manage routine tasks across dozens, or
even hundreds, of database instances.
Amazon ElastiCache Serverless makes it faster and easier to create a
cache and instantly scale to meet application demand, without needing to
provision, plan for, or manage capacity
Organizations building applications store frequently accessed data in
caches to improve application response times and reduce database costs.
These customers use open source, in-memory data stores like Redis and
Memcached for caching because of their high performance and scalability.
To simplify the process of building and running a cache, AWS offers
Amazon ElastiCache, a fully managed Redis- and Memcached-compatible
service that is used by hundreds of thousands of customers today for
real-time, cost-optimized performance. Today, Amazon ElastiCache scales
to hundreds of terabytes of data and hundreds of millions of operations
per second with microsecond response times, and organizations use it to
deploy highly available, mission-critical applications across multiple
Availability Zones. While many organizations appreciate the fine-grained
configuration options Amazon ElastiCache offers, some companies
building a new application or migrating existing workloads want to get
started quickly without designing and provisioning cache infrastructure,
a process that requires specialized expertise and deep familiarity with
application traffic patterns. Organizations also need to constantly
monitor and scale their capacity to maintain consistently high
performance, or overprovision for peak capacity, which results in excess
costs. As a result, they need a solution that can help them manage the
underlying infrastructure, making it faster and easier to create and
operate a cache.
With Amazon ElastiCache Serverless, customers can now create a highly
available cache in under a minute without infrastructure provisioning or
configuration. Amazon ElastiCache Serverless eliminates the complex,
time-consuming process of capacity planning by continuously monitoring a
cache's compute, memory, and network utilization and instantly scaling
vertically and horizontally to meet demand without downtime or
performance degradation. With Amazon ElastiCache Serverless, customers
no longer need to rightsize or fine-tune their caches. Amazon
ElastiCache Serverless automatically replicates data across multiple
Availability Zones and provides customers with 99.99% availability for
all workloads. Customers only pay for the data they store and the
compute their application uses. Amazon ElastiCache Serverless is
generally available today for both Redis- and Memcached-compatible
deployment options. To get started, visit aws.amazon.com/elasticache/features/#Serverless.
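As a hedged sketch of how this might look from a developer's perspective, the Python example below assumes the ElastiCache CreateServerlessCache API as exposed by boto3 and a Redis-compatible endpoint that requires TLS; the cache name and endpoint shown are placeholders, not values from this announcement.

    # A minimal sketch, assuming boto3 exposes CreateServerlessCache as
    # create_serverless_cache; cache name and endpoint are placeholders.
    import boto3
    import redis

    elasticache = boto3.client("elasticache", region_name="us-east-1")

    # Create a Redis-compatible serverless cache; no node types, shard
    # counts, or capacity settings to choose.
    elasticache.create_serverless_cache(
        ServerlessCacheName="demo-cache",
        Engine="redis",
    )

    # Once the cache is available (typically under a minute), its endpoint
    # can be read from the console or DescribeServerlessCaches; a placeholder
    # hostname is used here, and TLS is assumed to be required.
    cache = redis.Redis(
        host="demo-cache-xxxxxx.serverless.use1.cache.amazonaws.com",
        port=6379,
        ssl=True,
    )
    cache.set("session:42", "active", ex=300)  # cache a value with a 5-minute TTL
    print(cache.get("session:42"))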
Next-generation, AI-driven scaling and optimizations in Amazon
Redshift Serverless deliver better price-performance for variable
workloads
Tens of thousands of customers collectively process exabytes of data
with Amazon Redshift every day. Many of these customers rely on Amazon
Redshift Serverless, which automatically provisions and scales data
warehouse capacity to meet demand based on the number of concurrent
queries. While customers enjoy the ease of running analytics workloads
of all sizes on Amazon Redshift Serverless without needing to manage
data warehouse infrastructure, they would benefit further from the
ability to easily adapt to changes in their workloads along additional
dimensions, such as the amount of data or query complexity, to achieve
consistently high performance while optimizing cost. For example, an
organization with normally predictable dashboarding workloads may find
that a new regulatory reporting requirement means it needs to ingest
substantially more data and handle more intensive, complex queries. To
address workload changes along all dimensions, while ensuring consistent
performance and without disrupting existing workloads, an experienced
database administrator would have to spend hours offloading the
additional workload to a different data warehouse or making multiple,
complex manual adjustments. This includes temporarily increasing the
resources for data ingestion and new query workloads, pre-computing
results for quick data access, organizing data for efficient retrieval,
and timing data warehouse management tasks. All of these optimizations
need to be done continuously, while managing each individual
organization's priorities for balancing performance and cost, regardless
of changes in data volume, query complexity, or query concurrency.
With the new AI-driven scaling and optimizations, Amazon Redshift
Serverless automatically scales resources up and down across multiple
workload dimensions and performs optimizations to meet price-performance
targets. Amazon Redshift Serverless uses AI to learn customer workload
patterns along dimensions such as query complexity, data size, and
frequency and continuously adjusts capacity based on those dynamic
patterns to meet customer-specified, price-performance targets. Amazon
Redshift Serverless now also proactively adjusts resources based on
those customer workload patterns. For example, Amazon Redshift
Serverless with AI-driven scaling and optimizations automatically lowers
capacity during the day to handle dashboard workloads, but adds just
the right amount of capacity on demand whenever a complex query needs
to be processed. Then, overnight, Amazon Redshift Serverless
proactively increases capacity again to support large data processing
tasks without manual intervention. Building on existing self-tuning
capabilities, Amazon Redshift Serverless automatically measures and
adjusts resources and conducts a cost-benefit analysis to prioritize the
best optimization for a given workload. Customers can set their own
price-performance targets in the AWS Console, choosing how to balance
cost and performance. Amazon Redshift Serverless with AI-driven
scaling and optimizations is available in preview. To learn more, visit aws.amazon.com/redshift/redshift-serverless/.
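As a hedged illustration of the serverless experience described above, the sketch below submits a query to an Amazon Redshift Serverless workgroup through the Redshift Data API (boto3's "redshift-data" client). The workgroup, database, and table names are placeholders, and the capacity scaling and AI-driven optimizations happen entirely on the service side, not in this code.

    # A minimal sketch of querying an Amazon Redshift Serverless workgroup
    # via the Redshift Data API; names below are placeholders.
    import time

    import boto3

    client = boto3.client("redshift-data", region_name="us-east-1")

    # Submit a query to the serverless workgroup; no cluster to size or manage.
    statement = client.execute_statement(
        WorkgroupName="analytics-wg",
        Database="dev",
        Sql="SELECT region, SUM(revenue) FROM sales GROUP BY region",
    )

    # Poll until the statement finishes, then fetch the result set.
    while True:
        desc = client.describe_statement(Id=statement["Id"])
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(1)

    if desc["Status"] == "FINISHED":
        for record in client.get_statement_result(Id=statement["Id"])["Records"]:
            print(record)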
Genesys is a leader in AI-powered experience orchestration that helps
organizations engage with customers across any channel and empowers
employees in the contact center and beyond. "At Genesys, we use Amazon
ElastiCache to power high-throughput, low-latency storage for our
all-in-one cloud platform, enabling millions of customer interactions
per day," said Rob Gevers, chief architect at Genesys. "We expect Amazon
ElastiCache Serverless to help us improve performance and efficiency by
eliminating the need to provision instances, choose specific
configuration settings, and manage scaling. With Amazon ElastiCache Serverless,
we can remove administrative overhead and offer a significant leap in
stability while providing the scalability we need to handle our growing
usage and variable workloads."
MIO Partners, Inc. is a global investment and advisory institution. "Our
developers spend significant time evaluating usage, configuring node
types, and designing cluster topologies to set up and configure cache
capacity," said Anand Mishra, chief technology officer at MIO Partners. "With
Amazon ElastiCache Serverless, we can create a cache in less than a
minute without any infrastructure provisioning, configuration, or
capacity planning. Amazon ElastiCache Serverless eliminates the need for
time-consuming capacity planning, improving our cost efficiencies and
providing us with better operational reliability. Now, we can redeploy
the team of engineers who were previously engaged in managing Redis to
projects that deliver higher value for our clients."
Peloton aims to help people around the world reach their fitness goals
through its connected fitness equipment and subscription-based classes. "At
Peloton, we collect and process a variety of data, ranging from
hardware sales to instructor trends and user workout data, to create and
refine our business decisions for better customer experiences," said
Jerry Wang, director of Data Engineering at Peloton. "However, analytics
workloads are becoming more complex, causing our database
administrators to spend a lot more time changing capacity thresholds and
performing manual database optimizations. Leveraging the new
optimization capabilities in Amazon Redshift Serverless, we can
eliminate even more of the data warehouse management tasks, making it
more cost efficient while delivering better performance."
Quantiphi is a digital engineering company driven by the desire to solve transformational problems. "At
Quantiphi, we deliver tailored data analytics and machine learning
solutions for our customers, and Amazon Redshift remains the cornerstone
of our data warehouse services," said Sanchit Jain, data and
application practice lead at Quantiphi. "We have been hearing from our
customers that they want a solution that can also help them meet their
price-performance targets within their budget constraints. The newly introduced
AI-driven scaling and optimizations in Amazon Redshift Serverless will
help improve our offering, bringing flexibility and intelligence to data
management and ensuring automatic, cost-effective scaling based on
historical query data. With this new capability, we can provide tailored
solutions for our customers who seek optimal price-performance while
adapting to ever-growing data volumes."
Tuya Smart offers a cloud platform that connects devices via the
Internet of Things (IoT) and empowers partners and customers by
improving product value and making consumers' lives more convenient
through the application of technology. "Tuya's IoT Developer Platform has over
846,000 registered developers from over 200 countries, serving more than
7,600 enterprises with Tuya IoT solutions," said Chong Chen, head of
Data Infrastructure at Tuya Smart. "We have been using Amazon Aurora,
along with other AWS purpose-built databases, for more than five years,
but we had to build our own in-house sharding and proxy solution for
databases due to high write request volumes. We are excited that Amazon Aurora
Limitless Database can help us bring our IoT platform performance and
management to the next level by managing and scaling the write
throughput we need to serve our growing customer base while
providing a consistent, smooth, and efficient response experience for
our customers, all without us having to use a self-managed solution."