AWS re:Invent 2018 — a brief review
AWS re:Invent is an annual conference held at the end of November, announcing and describing all the innovations AWS has developed through the year. Let’s see what is in store for 2019.
Last year we covered the introduction of AWS Fargate and Aurora, the new generation of EC2 instances, the updates to Glacier and other AWS products announced during AWS re:Invent 2017. The general summary of this year’s conference is as follows: AWS is living up to the promise made at AWS Summit 2018 London to invest heavily in augmenting its cloud offerings with Artificial Intelligence, Machine Learning, and Big Data Analytics features.
The company announced over 20 new AI/ML offerings and updates to existing services, as well as managed blockchain development tools. Below is a brief overview of the new AWS features, services, and products presented during the AWS re:Invent 2018 week.
Integration with Windows File Server and Lustre
AWS has long supported Microsoft Active Directory and Windows workloads. The next step is enabling customers to run a Windows File Server infrastructure on AWS hardware and integrate it with other AWS products in their account. The introduction of Amazon FSx for Windows File Server enables corporations that have invested heavily in a Windows-based ecosystem to move to AWS nearly effortlessly, lifting and shifting their existing structures and workloads without rebuilding them from scratch. A sibling offering, Amazon FSx for Lustre, does the same for high-performance workloads; both ensure cross-compatibility, security, granular control, ease of management, and high tunability of the infrastructure. FSx for Windows File Server can be configured for up to 2,048 MB/s of throughput with file systems as large as 64 TB.
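For those who prefer automation over the console, here is a minimal boto3 sketch of provisioning an FSx for Windows File Server file system. The subnet, security group, and Managed Microsoft AD IDs are placeholders for resources assumed to already exist in your account.

```python
import boto3

fsx = boto3.client("fsx")

# Placeholder IDs: replace with a real subnet, security group,
# and AWS Managed Microsoft AD directory from your account.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=2048,                  # GiB of storage for this file system
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-0123456789",
        "ThroughputCapacity": 32,          # MB/s; configurable up to 2,048 MB/s
    },
)
print(response["FileSystem"]["FileSystemId"])
```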
DynamoDB on-demand as an alternative to provisioned capacity
DynamoDB is a great tool for cloud architects, providing multi-region tables, multi-master configuration, in-memory caching, encryption at rest, point-in-time recovery, and more. However, when the size of the read/write traffic cannot be predicted, one must either plan the capacity and enable auto-scaling, or risk throttled requests.
On-demand is a new capacity mode for DynamoDB. The introduction of this pay-per-request billing model enables the following use cases:
- Development of serverless apps running under pay-per-use billing
- Testing new apps, whose database workload is hard to predict
- Deploying a separate data table per customer for independent SaaS providers
While this billing model does not use auto-scaling (capacity adapts to traffic automatically), it provides a great way to save money by paying only for the read and write requests actually consumed. AWS customers can choose the capacity mode when creating a table and can switch an existing table between modes once per 24 hours.
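As an illustration, here is a minimal boto3 sketch (with a hypothetical table name) showing both sides of the choice: creating a table in on-demand mode and later switching it back to provisioned capacity.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a table in on-demand (pay-per-request) mode: no capacity planning needed.
dynamodb.create_table(
    TableName="orders",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# An existing, active table can later be switched back to provisioned capacity
# (mode changes are limited to once per 24 hours per table).
dynamodb.update_table(
    TableName="orders",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```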
Deep Learning acceleration with Elastic Inference from AWS
AWS has been actively powering their AI/ML services with the GPU capabilities of P3 instances. A single Amazon P3 instance packs up to 8 NVIDIA V100 GPUs and delivers up to 1 petaFLOP of mixed-precision compute. A decade ago such power was available only in top-end supercomputers like IBM’s Roadrunner; now it is here for any data scientist to use.
However, rigid instance parameters quite often mean overpaying, as finding the right balance between CPU, RAM, and the required volume of GPU power is hard. To rectify this discrepancy, AWS introduced Elastic Inference, which lets developers and admins attach exactly the required amount of GPU-powered inference acceleration to their Deep Learning projects. Flexibility instead of pre-defined capacities is a great improvement for all AI/ML projects running on AWS.
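As a rough illustration, the sketch below launches a modest CPU instance with an Elastic Inference accelerator attached via boto3. The AMI and subnet IDs are placeholders, and the VPC endpoint and IAM setup that Elastic Inference also requires are omitted for brevity.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a modest CPU instance and attach a small Elastic Inference accelerator,
# instead of paying for a full GPU instance.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder, e.g. a Deep Learning AMI
    InstanceType="c5.xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    ElasticInferenceAccelerators=[{"Type": "eia1.medium"}],
)
```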
Amazon SageMaker updates: Ground Truth and Managed RL
One of the biggest efforts and investments in training a Machine Learning model under a supervised learning approach is labeling the data. There are services like Amazon Mechanical Turk, there are public datasets like ImageNet, YouTube-8M, and MNIST, and there is the manual labeling approach. Yet open datasets might turn out to be irrelevant for your case, while hiring enough workforce yourself can be too costly or too slow.
This is why Amazon SageMaker now provides Ground Truth, a new feature that uses Machine Learning to label the data needed to train your Machine Learning model. The idea is simple: when SageMaker is confident its labels are correct, it labels the raw data itself. When human intervention is required (for example, when the images are too ambiguous for the model to label), the items are sent to human annotators. Their labels are then fed back to the model, further improving its accuracy. According to AWS, applying Ground Truth can cut data labeling costs by up to 70%.
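The routing logic behind this can be pictured with a toy sketch (a conceptual illustration, not the Ground Truth API): items the model labels with high confidence are accepted automatically, while the rest go to human annotators. The confidence threshold, model, and human-task interfaces below are assumptions made for the example.

```python
# Toy illustration of the active-learning loop described above.
CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off for illustration

def route_for_labeling(items, model, ask_human):
    labeled = []
    for item in items:
        label, confidence = model.predict(item)      # assumed model interface
        if confidence >= CONFIDENCE_THRESHOLD:
            labeled.append((item, label))            # machine-labeled
        else:
            labeled.append((item, ask_human(item)))  # routed to human workforce
    return labeled
```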
Another new SageMaker feature announced during the event is managed Reinforcement Learning. Reinforcement learning is an approach to ML model training in which the algorithm is rewarded, through a human-defined reward function, for making the right decisions. This helps the algorithm develop a sound decision-making policy for tasks that are more difficult than telling a cat from a dog. Leveraging the Amazon SageMaker RL feature drastically lowers the time needed to train such ML models.
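To make the reward idea concrete, here is a toy Q-learning sketch, independent of the SageMaker RL API: an agent on a five-cell corridor receives a reward only for reaching the rightmost cell and gradually learns to prefer moving right.

```python
import random

# Toy Q-learning: the reward function (reach the rightmost cell) is human-defined,
# and the agent learns a policy that maximizes it.
N_STATES, ACTIONS = 5, (-1, +1)           # move left / move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount, exploration
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        action = (random.choice(ACTIONS) if random.random() < EPSILON
                  else max(ACTIONS, key=lambda a: q[(state, a)]))
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # human-defined reward
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy should prefer +1 (move right) in every state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```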
In addition, the AWS Marketplace range was expanded with more than 150 ready-made Machine Learning models and algorithms. These can be bought and configured by any AWS customer, enabling multiple use cases in retail, manufacturing, media, and other categories, and the list grows daily.
Amazon Personalize lets you engage your audiences through an Amazon-built recommendation engine. Its AutoML algorithms draw on 20 years of Amazon.com recommendation experience to automatically train, deploy, and run ML models of your choice, working with datasets stored in Amazon S3 or with streaming data sent live from your systems.
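Assuming a dataset group, solution, and campaign have already been created, querying a trained Personalize campaign is a short call with boto3; the campaign ARN and user ID below are placeholders.

```python
import boto3

personalize_runtime = boto3.client("personalize-runtime")

# Placeholder campaign ARN and user ID: substitute your own trained campaign.
response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/demo",
    userId="user-42",
    numResults=10,
)
for item in response["itemList"]:
    print(item["itemId"])
```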
Final thoughts on AWS re:Invent 2018 news
The updates, releases, and announcements listed above form only a portion of the vast list of technology introduced during AWS re:Invent 2018. The full list includes updates to Amazon Glacier and Amazon RDS, Amazon Managed Blockchain for building decentralized apps on AWS infrastructure, the Amazon Quantum Ledger Database (QLDB) with its immutable, cryptographically verifiable transaction log, and much, much more.
We can draw the following conclusions:
- Despite several interesting updates, AWS IaaS offerings are mature, so we should not expect tectonic shifts in this domain
- The company is shifting its emphasis from providing raw infrastructure services to providing AI-augmented solutions, turning internal modules and tools into public products
- AWS devotes great effort to building an ML-powered ecosystem of services to drive more value to their customers, cost-efficiently and with ease.
We are sure the upcoming DevOps events and conferences of 2019 will bring new brilliant product launches from AWS, further increasing the range of possibilities for their customers. AWS helps businesses build data-driven, ML-powered solutions to better manage their infrastructure and engage their audiences. Isn’t that what every business wants?