AWS Summit ASEAN 2023 | Simplify data management with modern data architecture on AWS (INSO203)
- December 15, 2023
- Posted by: MainInstructor
- Category: Amazon Web Services, Data Science, Go, Python, Scala, SQL
[Applause] Hello everyone, I'm Seema Gupta, and I'm a Principal Solutions Architect with AWS. Thank you for taking the time to tune into our session at the AWS ASEAN Summit. Let's first browse through our agenda for the next 30 minutes. We will start with how data can help us achieve better business outcomes. Then we will talk about how to create a modern data strategy and an approach to building a modern data architecture. After that, we will spend some time diving deep into the components of a modern data architecture, and then I will have our AWS customer, the Autodesk team, join us and share how they built components of their modern data architecture and some of their learnings along the way.

First, let's understand how data can help us achieve business outcomes. Today, customers are making better and faster decisions by breaking down data silos. The right data strategy can also improve customer experience, and hence loyalty. Data-driven insights can help organizations innovate faster, which helps them stay ahead of the competition. Data can also help you understand your business situation better, which helps you come up with solutions for the future. This is in addition to reducing your overall operational cost and optimizing your business processes.

But there are challenges. Data volumes are increasing, exploding from terabytes to petabytes, and sometimes exabytes of data. New types of data are being added, like structured data, unstructured data, streaming data in real time, and so on. On-premises data stores and data solutions can't scale to handle these data volumes fast enough, not to miss other aspects like data security and data compliance. That's why we need a modern data strategy.

There are three pillars: modernize, unify, innovate. For modernizing your data infrastructure, we have seen customers doing this by moving to the cloud and handing the undifferentiated heavy lifting to the cloud provider. Secondly, unify your data by breaking down data silos so that data can be put to work across database, data analytics, and machine learning services. The third pillar of a modern data strategy is innovate, where you can invent new experiences and reimagine existing processes. With a modern data strategy you can move and store any amount of data and access it seamlessly, and you can control who has access to the data with the proper security and data governance controls. These pillars do not require sequential implementation; you can be working on all three in parallel, depending on your data journey.
This slide represents a layered modern data architecture, but before we dive deep into it, note that data discovery is almost a prerequisite. As part of data discovery, you define business value by conducting multiple interactive sessions with your stakeholders, and you identify your data consumers, like data analysts, data engineers, business analysts, and data scientists. That's when you talk about tools to bring data into your data platform, which forms the first technology layer of your modern data architecture. In the second layer, you focus on storage, to store structured and unstructured data. Since you will be ingesting data from a wide variety of data sources, you need a data catalog for all your data sets, which forms the third layer of your data architecture. Then you need to process, enrich, and transform this data as part of the data processing layer, and then you need to enable your users to consume this data. Protecting your data across all the layers is a mandate, which forms the last layer of this layered modern data architecture.

There are two main advantages of a layered modern data architecture. First, you can build the layers incrementally and independently of each other. Second, when you make changes to one layer, it doesn't impact other parts of the architecture.

Let's go a bit deeper into each of these layers, starting with the ingestion layer, and understand our data sources. We could have structured data from ERP and CRM applications, and we could also have semi-structured data from web applications and NoSQL databases.
There could also be unstructured data from IoT sensors, so we need purpose-built AWS services for ingesting data from different types of data sources. For example, with AWS Database Migration Service (DMS) you can ingest data from databases directly into your Amazon S3 data lake or Amazon Redshift data warehouse, and you can also use DMS for your change data capture use cases. For business application data in files stored on network storage drives, you can use AWS DataSync to move data to Amazon S3, or you can use AWS Snow Family devices for petabyte-scale data transfer use cases. To unlock the value of SaaS application data, like Salesforce, ServiceNow, and Datadog, you can use Amazon AppFlow, a fully managed service which can connect to various SaaS applications and ingest data into your Amazon S3 data lake or Amazon Redshift data warehouse. Organizations also exchange data files with partners, and these partner data feeds, which contain valuable data, are typically exchanged through FTP, so you can use AWS Transfer Family (SFTP) to bring partner data feeds into your storage layer. Organizations also like to ingest data from third-party data sources, like market insights, historical data feeds, and consumer databases; you can ingest third-party data feeds into your Amazon S3 data lake landing zone by subscribing to third-party data products using AWS Data Exchange, which includes hundreds of data sets collected from popular public sources. Many organizations use a wide variety of custom data sources as well. To bring these custom data sources into the storage layer and make meaningful business decisions, you can choose from 100-plus AWS Glue connectors, which can discover and integrate with a wide variety of data sources, or you can also look for connectors in AWS Marketplace. For your streaming data, like log files generated by your applications, IoT sensor data, and social media feeds, you can use Amazon Kinesis Data Streams or Amazon MSK.
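To make the streaming-ingestion side concrete, here is a minimal Python sketch of a producer that sends application events to a Kinesis data stream. The stream name, event fields, and partition key are all hypothetical placeholders; the 500-record cap, however, is the real limit for a single PutRecords call.

```python
import json

# Kinesis PutRecords accepts at most 500 records per call, so batch accordingly.
MAX_BATCH = 500

def to_kinesis_entries(events, partition_key_field="device_id"):
    """Convert raw event dicts into PutRecords entries (hypothetical schema)."""
    return [
        {
            "Data": json.dumps(event).encode("utf-8"),
            "PartitionKey": str(event[partition_key_field]),
        }
        for event in events
    ]

def batch(entries, size=MAX_BATCH):
    """Split entries into batches no larger than the PutRecords limit."""
    return [entries[i:i + size] for i in range(0, len(entries), size)]

def send(stream_name, events):
    """Push events to a Kinesis data stream (requires AWS credentials)."""
    import boto3
    kinesis = boto3.client("kinesis")
    for group in batch(to_kinesis_entries(events)):
        kinesis.put_records(StreamName=stream_name, Records=group)
```

The batching and serialization are pure functions, so they can be exercised without an AWS account; only `send` actually touches the service.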
The modern data architecture's storage layer is a combination of an Amazon S3 data lake and an Amazon Redshift data warehouse. Typically, Amazon Redshift is used for business intelligence reports and interactive queries, whereas Amazon S3 is typically used for interactive queries, big data processing, and machine learning. You can bring data from a wide variety of data sources into your storage layer, like Amazon S3 and Amazon Redshift, using commands like the COPY command. You can use federated queries to query across your data warehouse, data lakes, and operational databases, and Redshift ML makes it easy for SQL developers to build machine learning workloads on top of their data warehouse data.

While building a modern data architecture on AWS, you will start ingesting hundreds to thousands of data sets from a wide variety of data sources, so a central data catalog is needed for your data sets. You can use the AWS Glue Data Catalog as your central data catalog: an AWS Glue crawler can scan your data and create a central data catalog for your data sets. The AWS Glue Data Catalog integrates with many AWS services, like Amazon Redshift Spectrum, Amazon EMR, Amazon Athena, Amazon Kinesis Data Streams, and Amazon MSK.
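As one concrete illustration of the storage layer, the COPY command mentioned above is how files in S3 land in Redshift. The sketch below just assembles the statement; the table name, S3 prefix, and IAM role are placeholders you would replace with your own.

```python
def build_copy_statement(table, s3_uri, iam_role_arn, fmt="PARQUET"):
    """Build a Redshift COPY statement that loads files from S3.

    FORMAT AS PARQUET needs no delimiter options; for CSV you would add
    delimiter and header clauses instead.
    """
    return (
        f"COPY {table} "
        f"FROM '{s3_uri}' "
        f"IAM_ROLE '{iam_role_arn}' "
        f"FORMAT AS {fmt};"
    )

# All three arguments below are hypothetical example values.
sql = build_copy_statement(
    "sales.orders",
    "s3://example-datalake/raw/orders/",
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",
)
```

The resulting string would then be run against the cluster through your SQL client or the Redshift Data API.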
Now let us zoom into the data processing layer. You can use AWS Glue as a serverless option for your batch and streaming data processing pipelines, to transform and enrich your data sets into raw, transformed, and curated data zones. Or you can use Amazon EMR, which supports popular big data frameworks like Spark, Hadoop, and others, for batch or streaming data processing pipelines. That's not all: you can also use purpose-built services for your streaming data, like Amazon Kinesis Data Analytics, which uses Apache Flink for stateful and complex stream processing, or AWS Lambda for simple stateless streaming data processing. AWS Glue provides a serverless data processing runtime where you can process structured and unstructured data. With offerings like AWS Glue Studio, data engineers can build data processing jobs easily without writing any code, and similarly, with AWS Glue DataBrew, data scientists and data engineers can build data processing pipelines. AWS Glue interactive sessions come with Jupyter notebooks, where data scientists and data engineers can incrementally build data processing pipelines, and with AWS Glue you can seamlessly move data across many data stores. At re:Invent 2022 we also announced a new feature that lets you implement data quality checks using native functionality of AWS Glue.
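For the simple stateless streaming case mentioned above, a Lambda function wired to a Kinesis event source might look like the sketch below. The event shape (base64-encoded payloads under `Records[].kinesis.data`) matches what Lambda delivers for Kinesis; the enrichment step itself is a made-up placeholder.

```python
import base64
import json

def handler(event, context=None):
    """A minimal stateless transform for records arriving from a Kinesis
    event source mapping: decode each record, skip malformed payloads,
    and tag the rest with a processing flag (an illustrative enrichment)."""
    results = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        try:
            doc = json.loads(payload)
        except ValueError:
            continue  # not valid JSON: drop the record
        doc["processed"] = True
        results.append(doc)
    return results
```

Because the handler keeps no state between invocations, scaling it out is just a matter of the event source fanning invocations across shards.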
Coming to the data consumption layer: use the right tool for the right job by using purpose-built data analytics services. For example, you can use Amazon EMR for big data processing, Amazon Athena for interactive queries, Amazon Kinesis Data Streams and Amazon MSK for real-time analytics, Amazon OpenSearch Service for operational analytics, Amazon Redshift as your cloud data warehouse, Amazon QuickSight for business intelligence, and AWS Glue for data integration. Use purpose-built databases for specific use cases and business requirements: for example, you can use Amazon RDS and Amazon Aurora for relational databases, Amazon DynamoDB for NoSQL databases, and Amazon Neptune for graph databases. We are also bringing machine learning closer to data services, with machine learning integrations like Aurora ML, Neptune ML, Redshift ML, Athena ML, and QuickSight ML.

In a modern data architecture, consumption of data is also enabled through machine learning. AWS offers SageMaker, a Studio-based machine learning platform with tools that can be used in all phases of the machine learning lifecycle, including data preparation, model building, model training and tuning, deployment, and hosting. In addition, for development teams with no data science skills, we offer pre-trained AI services, which can help companies across different industries deploy common AI use cases, like AI-enabled contact centers, personalization, intelligent search, intelligent document processing, predictive maintenance, content moderation, and identity verification, to enhance their customers' experience, improve their employees' productivity, and optimize their business processes. While we at AWS provide a variety of tools, there are hundreds of AI use cases across industries and business functions, and successful business outcomes from AI/ML depend on the careful selection of the right use case. Customers want to explore the art of the possible with AI and learn from other companies that have successfully implemented it. The AI Use Case Explorer is an easy-to-use search tool that makes it easier to find the most relevant use cases and case studies depending on a customer's industry, business function, and desired business outcome.
Coming to the last layer of the modern data architecture: data governance plays a key role. Some of the AWS services that can help you with data governance include AWS Identity and Access Management and AWS Lake Formation. For fine-grained access control, you can use Lake Formation at the database level, table level, column level, and even row level, and AWS Lake Formation has native integrations with many AWS services. If you want to build an enterprise data platform with multiple data producers and multiple data consumers, you can use a data mesh architecture, with federated governance and data permissions managed by AWS Lake Formation. Amazon DataZone is a new data management service to catalog, discover, analyze, share, and govern data across organizational boundaries.
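As a sketch of what fine-grained, column-level access control with Lake Formation looks like in practice, the helper below builds a GrantPermissions request that allows SELECT on only a few columns of a table. The principal, database, table, and column names are illustrative; the request shape follows the Lake Formation API.

```python
def column_grant_request(principal_arn, database, table, columns):
    """Build a Lake Formation GrantPermissions request granting SELECT on
    only the listed columns. All names passed in here are placeholders."""
    return {
        "Principal": {"DataLakePrincipalIdentifier": principal_arn},
        "Resource": {
            "TableWithColumns": {
                "DatabaseName": database,
                "Name": table,
                "ColumnNames": columns,
            }
        },
        "Permissions": ["SELECT"],
    }

def grant(request):
    """Apply the grant (requires AWS credentials and a Lake Formation setup)."""
    import boto3
    boto3.client("lakeformation").grant_permissions(**request)
```

Keeping the request construction separate from the API call makes the intended permissions easy to review and test before anything is applied.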
Now that we have discussed the different layers of a modern data architecture, let's take a quick glance at a reference architecture for modern data analytics on AWS. You will find a lot of AWS service icons here; we have spoken about most of these in the last few minutes. Something to note: you are not going to use all of these services in your solution; you will choose services depending on your use case and business requirements. This slide represents a reference architecture to demonstrate how AWS services can come together to form the different layers of a modern data architecture. Now that we have learned the various layers, let's hear from the Autodesk team how they built their storage, processing, and consumption layers using some of the AWS services.

Thank you, Seema. Hello, my name is Vamshi, and I'm a Principal Engineer with Autodesk.
As we learned from Seema, the storage layer in the modern data architecture is a combination of an Amazon S3 data lake and a Redshift data warehouse. While Amazon S3 is an exabyte-scale storage service used to store any type of data, such as structured, unstructured, and semi-structured data, Amazon Redshift is a SQL-based cloud data warehouse that is fast and supports analytics on structured and semi-structured data. In the next few slides I will share our experiences building a modern data platform at our organization; we call it the Enterprise Integration Hub, or EIH.

Before going through our modern data architecture, I would like to throw some light on our legacy data management system at Autodesk. In order to provide a seamless experience to users and customers, our applications are backed by multiple enterprise systems; the data sources in the slide represent all those enterprise systems. The data engineering teams rely on these systems to generate insights and reports by performing simple to complex transformations and sharing them with downstream applications. Data from these systems is synced to a relational database, which acts as the repository for our raw data sets. This database is the source from which our data engineers extract data for integration. Using an ETL tool, we transform the data as per our needs and load the curated data into our data warehouse systems. The curated, or transformed, data sets are further consumed by our downstream applications for analytics, reporting, or other operations. Let me highlight that all the infrastructure required to handle this was set up on premises.
Now let me quickly take you through the challenges we faced with this setup. Scaling was a major challenge, as traditional databases only support vertical scaling, and vertical scaling implies making changes to your hardware. We often faced bottlenecks due to ETL compute capacity limitations, causing delays, especially when complex ETL pipelines were up for execution. The software licensing model was not flexible and required financial commitment in advance for a specific duration. And of course, not to forget the challenges related to setting up the infrastructure itself, such as security and maintenance. All these problems pushed us to explore the emerging tools and technologies around data warehousing and ETL, and as we explored, we realized that moving to the cloud was inevitable.

This slide represents our current architecture. In order to understand our modern data architecture, I must first talk about Amazon Redshift. As a cloud data warehouse, Redshift had all the features we wanted. With Redshift, we just need to launch a cluster by choosing a cluster configuration that suits our needs, and the infrastructure behind the cluster is fully managed by AWS. You can scale in or out, or change the cluster configuration easily, and its processing speed is very fast.
We explored Redshift further because it is pay-per-use and requires no upfront commitment, and we then decided to go ahead with Redshift as our cloud data warehouse. In addition to Redshift, we chose Matillion as our cloud ETL tool for ingestion and transformation of the data. Matillion is a modern cloud-based ETL tool that was built specifically for cloud data warehouses like Redshift. Matillion runs on an EC2 instance and uses a push-down mechanism, placing queries on the data warehouse, which in this case is Redshift. As shown in the architecture diagram, we connect to our enterprise applications using Matillion and load the raw data sets into Redshift, then run transformations on this data and generate curated data sets as per our requirements. Choosing the right technologies for your modern data architecture is only the beginning; a good plan and design are needed to ensure that resources are utilized efficiently and access controls are in place.
In the next slides I would like to briefly talk about our high-level database design and access control. As a first step, we worked on the database design and made sure we followed best practices, to avoid problems related to data mismanagement and to avoid complexity as the data grows. After thorough research on our data systems and the types of data we would need to handle, we designed the schemas in our database to accommodate data sets in the most efficient way, which would later help with easy retrieval of information and easy maintenance as well. As a next step, we worked on the access control system to facilitate data access for our consumers in a secure manner: we created groups, each with a different set of privileges, so that users are provided access to the schemas through groups. We later made sure that tables are created with the right sort key and distribution key, options provided by Redshift at the time of table creation, to efficiently store and retrieve data in a fast manner. All these measures helped us achieve efficient access management, speed, and security.
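The sort-key and distribution-key choices described above are made in the table DDL. The sketch below assembles such a statement; the table and columns are placeholders, while DISTKEY and COMPOUND SORTKEY are the actual Redshift clauses.

```python
def build_create_table(table, columns, distkey, sortkeys):
    """Build a Redshift CREATE TABLE statement with a distribution key and
    a compound sort key. Column definitions here are illustrative only."""
    cols = ",\n    ".join(f"{name} {ctype}" for name, ctype in columns)
    return (
        f"CREATE TABLE {table} (\n    {cols}\n) "
        f"DISTKEY({distkey}) "
        f"COMPOUND SORTKEY({', '.join(sortkeys)});"
    )
```

Picking a high-cardinality join column as the distribution key spreads rows evenly across slices, while leading the sort key with the most common filter column (often a date) lets Redshift skip blocks at scan time.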
This slide shows our current cluster configuration for Redshift and Matillion. We use Redshift RA3, the latest cluster type from Redshift, which enables us to manage compute and storage needs separately, with four nodes of size ra3.xlplus. The current data size at our EIH is around 60 TB, and it is growing rapidly. We have over 600 Matillion jobs for ingestion and transformation, and we use two m5.xlarge instances for Matillion. We have our jobs scheduled around the clock to spread out and use the cluster resources efficiently.

From our journey so far, especially with Redshift, I would like to share some of the key learnings that helped us overcome some crucial challenges. When we decided to move from the dense compute cluster type, DC2, to the RA3 cluster configuration, we were not sure what the cost and performance implications would be. We then discovered a Redshift utility called the Replay tool, which helps simulate the performance of your desired cluster configuration. The Replay tool helped us assess the performance of RA3 nodes compared to DC2 and choose RA3 with confidence. Another key learning was that you should design your jobs carefully: a poorly designed ETL job might consume your cluster CPU and force you to add more nodes, only for you to discover later that it won't help. So always focus on fixing your jobs first when you face a CPU utilization problem. The last one: secure the KMS key that is in use by your Redshift cluster. If you lose your key, you won't be able to access the cluster, and believe me, the cluster snapshots are of no use without the key either. With this, we conclude the storage layer of our modern data architecture. I will now hand over to my colleague Monei, who will cover data processing and consumption patterns.
Thank you, Vamshi. Hello everyone, I am Monei, a Senior Principal Data Engineer with Autodesk. In the next few minutes I'll take you through how we built the processing layer of our modern data architecture using AWS Glue, and I will also touch upon some of our consumption patterns using AWS data and machine learning services.

As the amount of data at an organization grows, making use of that data in analytics to derive business insights grows as well, and for the efficient management of all this data, an ETL process is necessary. At Autodesk we use AWS Glue to perform complex ETL tasks. AWS Glue is a serverless ETL tool: once we specify the source and destination of the data, it can generate the code, in Python or Scala, for the entire ETL pipeline. This really helps us streamline data integration operations and allows users to parallelize heavy workloads.

Data consumption is an important factor in today's world, as the rate of data consumption is unable to keep pace with the rate at which data is generated. One possible reason is that the outcome is always generated in a relational format, and analyzing humongous amounts of data in a relational database is not feasible. Thus it is important to understand alternate ways of consuming the same data set using different tools, so that parsing through the data becomes faster and more feasible. AWS data and analytics services are purpose-built to help you quickly extract data insights using the most appropriate tool for the job.
At Autodesk, we had a use case where we needed to identify relationships and patterns quickly in a vast amount of data, without waiting for relationships to materialize over weeks or months. This is where we needed a graph database. In a graph database, relationships have as much value as the data itself: a graph database transforms a complex web of dynamic data into meaningful relationships to help deliver real-time insights and actions. In our specific scenario, we had a slowly changing dimension in our sales data, where certain columns were changing over a period of time. Storing those on a partition basis, or in relational databases, would have increased the data volume dramatically, and consuming them at the same pace would have become practically impossible. So we stored this data in Amazon Neptune, a fully managed database service that makes it easier to build and run graph applications. This way, we could add new nodes only as and when those columns, or dimensions, changed, so analyzing them became much more feasible and practical. To load data into the graph database, we used the programming languages our team was comfortable with, Python and Scala, and to traverse the graph we used the open-source library Gremlin. In summary, I can say that relational databases are no longer suitable for real-time big data analytics or self-service business intelligence, because the scope of such databases is too limited. A common misconception that has circulated over the years is that graph processing is only relevant for traditional social network data or network channels.
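To make the slowly-changing-dimension idea concrete without requiring a Neptune cluster, here is a toy in-memory version of the pattern: a new version node is created only when the tracked columns actually change, with edges linking successive versions. In Neptune you would do the traversal with Gremlin instead (for example, repeating out() steps over the version edges); the class below is only a conceptual stand-in, with made-up labels and keys.

```python
class DimensionGraph:
    """Toy graph model of a slowly changing dimension (not Neptune itself)."""

    def __init__(self):
        self.nodes = {}      # node_id -> column values for that version
        self.edges = []      # (prev_node, "NEXT_VERSION", new_node)
        self._latest = {}    # entity key -> latest node_id
        self._version = {}   # entity key -> version counter

    def record(self, key, attrs):
        """Add a new version node only if the tracked columns changed."""
        latest = self._latest.get(key)
        if latest is not None and self.nodes[latest] == attrs:
            return latest                      # nothing changed: no new node
        version = self._version.get(key, 0) + 1
        node_id = f"{key}#v{version}"
        self.nodes[node_id] = dict(attrs)
        if latest is not None:
            self.edges.append((latest, "NEXT_VERSION", node_id))
        self._latest[key] = node_id
        self._version[key] = version
        return node_id

    def history(self, key):
        """Walk NEXT_VERSION edges from the first version to the latest."""
        node = f"{key}#v1"
        chain = [node]
        step = {a: b for a, _, b in self.edges}
        while node in step:
            node = step[node]
            chain.append(node)
        return chain
```

Because unchanged snapshots add no nodes, the graph grows with the rate of change rather than the rate of observation, which is exactly why this beats storing every daily snapshot in partitioned relational tables.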
But even Gartner has predicted that graph processing and graph databases will grow at 100 percent annually over the next few years, to accelerate data preparation and enable more complex and adaptive data science. In my opinion, AI and ML together will transform enterprise data management in the coming years.

Talking about our use case at Autodesk: we had a requirement to predict the health of our Autodesk customers, or accounts, on a daily basis as far as renewals are concerned. The key here is to generate the health of customers on a regular basis. As part of the old process, the data engineering team used to generate data by performing multiple heavy operations, and after that the data was sent to the data science team to create models and predict account health. This whole process was very manual and took a long time. In our modern architecture, Glue has come to the data engineering team's rescue in many such instances by automating the end-to-end data engineering and inferencing to produce the final result more quickly and consistently. This ensures our data is always updated for better data-driven decision making. Our data science team uses Amazon SageMaker, the AWS machine learning platform, to train the machine learning models for predicting account health.
Like other cloud services, AWS Glue regularly gets new enhancements and features, which requires Glue version upgrades in the production environment, and that could be quite time consuming. Thanks to the AWS infrastructure-as-code service called CloudFormation, we can upgrade our Glue version just by changing the configuration file and deploying it, which hardly takes one to two minutes for a bunch of Glue jobs, in this case maybe 10 to 20 Glue jobs. We also experienced that with every latest version of Glue, the DAGs, that is, directed acyclic graphs, are more optimized, which results in faster execution of the same job with the same volume of data. As Glue is serverless, this also results in lower cost, which is a win-win situation for consumers like us, and at the same time makes it easy to migrate more and more workloads to Glue. With earlier versions of AWS Glue, the startup time used to be 5 to 10 minutes, and a 5-to-10-minute startup time sometimes means a lot of waiting while testing your code. As 90 to 95 percent of the AWS roadmap is determined by customer feedback, AWS listened to their customers' pain point, which is us in this case, and improved the startup time to only 5 to 10 seconds. Glue can also be integrated easily with external services like Snowflake, Anaplan, PostgreSQL, and many more, which really makes life easier during any migration projects where external systems are involved. With this I will conclude my session; thank you, and I will hand over to Seema for the rest of the session.
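The CloudFormation-driven version upgrade described above comes down to one property on the job resource. A hedged sketch, with made-up resource names and script locations; GlueVersion is the property you would bump and redeploy:

```yaml
# Illustrative AWS::Glue::Job resource; names and paths are placeholders.
NightlyEtlJob:
  Type: AWS::Glue::Job
  Properties:
    Name: nightly-etl
    Role: !Ref GlueJobRole
    GlueVersion: "4.0"        # bump this value and redeploy to upgrade
    Command:
      Name: glueetl
      ScriptLocation: s3://example-bucket/scripts/nightly_etl.py
```

Changing the quoted version string and running a stack update is what makes the one-to-two-minute upgrade across a batch of jobs possible.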
Thank you, Monei. Now that you have learned about modern data architecture and heard from the Autodesk team, if you want to accelerate your modern data strategy on AWS, you can leverage any of the many programs available across modernize, unify, and innovate. Please work with your AWS account manager or solutions architect to learn more about these programs; this slide gives you a glimpse of the various programs and channels available to you. You can also learn with AWS Training and Certification: visit Skill Builder to explore over 600 free digital courses, hands-on labs, and role-based games. Thank you again for your time today, and please do fill out the survey, so that we can understand your needs better and serve you better.
Great presentation Seema and Vamshi. The sheer number of services that you had to cover would have been a daunting task for anyone. Really good summarisation. Thanks.
Very good presentation!