
Pass Your Amazon AWS Certified Data Analytics - Specialty Exam Easy!

100% Real Amazon AWS Certified Data Analytics - Specialty Exam Questions & Answers, Accurate & Verified By IT Experts

Instant Download, Free Fast Updates, 99.6% Pass Rate

AWS Certified Data Analytics - Specialty Premium Bundle

$79.99

Amazon AWS Certified Data Analytics - Specialty Premium Bundle

AWS Certified Data Analytics - Specialty Premium File: 233 Questions & Answers

Last Update: Mar 05, 2024

AWS Certified Data Analytics - Specialty Training Course: 124 Video Lectures

AWS Certified Data Analytics - Specialty PDF Study Guide: 557 Pages

AWS Certified Data Analytics - Specialty Bundle gives you unlimited access to "AWS Certified Data Analytics - Specialty" files. However, this does not replace the need for a VCE exam simulator. To download the VCE exam simulator, click here


Amazon AWS Certified Data Analytics - Specialty Practice Test Questions, Exam Dumps

Amazon AWS Certified Data Analytics - Specialty (DAS-C01) exam dumps, practice test questions, study guide, and video training course to help you study and pass quickly and easily. You need the Avanset VCE Exam Simulator in order to study the Amazon AWS Certified Data Analytics - Specialty exam dumps and practice test questions in VCE format.

Domain 1: Collection

11. [Exercise] Kinesis Firehose, Part 3

So let's go ahead and reshape this configuration file into what we need. The first thing to do is set up the endpoint for the Firehose. These first two lines are okay to keep as is, but if you're not in the us-east-1 region, you'll need to explicitly specify the Firehose endpoint, because it defaults to us-east-1. If you're somewhere else, head back to your AWS console, go back to Kinesis, and you'll see the PurchaseLogs delivery stream you created earlier. On that page you should see the delivery stream ARN, and in there you'll see the region it was created in. For me, that's us-east-1. If that's the same for you, you don't need to do anything; if it's different, go back to the settings here and type in the endpoint for your region, firehose.<your region>.amazonaws.com, which for me would be firehose.us-east-1.amazonaws.com.

All right, the next thing we need to deal with is permissions. We need to make sure our Kinesis agent can talk to our delivery stream; by default, it cannot. There are a few ways of doing this, and you'll see me do it a couple of different ways throughout the course. One is to explicitly copy a set of access keys into this configuration file: you can put an access key ID and a secret access key right here. To do that, you would go back to your AWS console, go to IAM, and create a new user. Don't follow along with me here, though, because I'm not actually going to do it this way; I'm just illustrating another way of doing it. I could go to Users, create a new user, and copy its access key ID and secret access key out somewhere. With that, I could add a setting here called awsAccessKeyId, with the access key ID inside the quotes, followed by a comma, and then a line called awsSecretAccessKey set to the secret access key. That is one way of doing it, and it will work. However, it's not really considered best security practice, because if someone were to break into the EC2 node and get this configuration file, they could potentially have the keys to get into your account with administrator access, and that would be bad. But if you see those lines in some configuration files later in the course, that's where they came from.

Let me show you a better way of doing it. Let's go back to our AWS console and go back to EC2. From here, navigate to your running instances, select the one we just created, right-click it, and select Instance Settings, then Attach/Replace IAM Role. What we're going to do is attach an IAM role to this entire EC2 instance to give it all the permissions it needs, without having to copy our credentials onto the node itself. From here, we can say "Create a new IAM role" and hit "Create Role" to make a new one. It's going to be for the EC2 service, of course. Click Next to go to the permissions screen, and from there we'll just attach the AdministratorAccess policy to it. If you want to be even more secure, you could be more restrictive in the permissions you grant to the EC2 node, but since we're going to be doing a lot of stuff with this node in the future, we'll just leave it open to pretty much everything. Click Next again, and Next again.
Give it a name, say EC2Access, double-check everything, make sure you have the AdministratorAccess policy attached to it, and create the role. All right, now we can go back to the other tab and hit the refresh button to pick up that new role we just created, and we should be able to select it now. There it is: EC2Access. Click "Apply," and there we go. We have now attached our administrator access role to this entire EC2 node, so we no longer have to put specific access key and secret key credentials into our configuration files. That's a good thing.

All right, now that we have that out of the way, let's go back to our configuration file. We need to set up the flows in here. We're not using a Kinesis data stream here, we're using a Firehose stream, and this first block under the flows is for a Kinesis stream, so I'm just going to delete that first block entirely. Hit Ctrl+K repeatedly to get rid of those lines until it looks like that. Now we'll change the file pattern and delivery stream to match our own setup. Delete the existing file pattern and replace it with /var/log/cadabra (make sure you spell that properly), and for the delivery stream, rename it to the name of our Kinesis Firehose stream, which is PurchaseLogs, just like that. And that's all there is to it. Double-check everything; make sure it looks good. Make sure you have the proper endpoint in there for the region you're using, and that you don't have any extra commas or missing commas. Hit Ctrl+O to write that out, Enter to accept the file name, and Ctrl+X to exit.

So everything is now configured to monitor our /var/log/cadabra folder, and any new data the Kinesis agent finds will be shoved into Kinesis Firehose, which in turn will shove it into S3 for us. All we need to do now is start the Kinesis agent. To do that, type sudo service aws-kinesis-agent start. And this time it worked, cool. We also want to make sure it starts up automatically when we start our instance, so we'll run sudo chkconfig aws-kinesis-agent on; pay attention to the aws-kinesis-agent name in that command.

All right, now let's go back to our home directory with cd ~, and we can actually kick this thing off. Type sudo ./LogGenerator.py 500000, which means we're going to generate 500,000 orders in our log directory and see if Firehose successfully puts them into S3. Let's go ahead and kick that off. It's a lot of data, so it will take a moment. All right, it looks like it worked. We can go check it out: go to /var/log/cadabra, and there is a massive log full of 500,000 orders. Now let's take a look at what the Kinesis agent is doing to monitor its progress. We can just type tail -f /var/log/aws-kinesis-agent/aws-kinesis-agent.log, and here we can see what the agent is actually doing. It looks good so far: it has already started to parse some of these records. The tailer has parsed 237,000 records, "transformed 0," and so on, and here we see it has already sent 142,000 records successfully to the Firehose destination. We're just going to let that sit there until it's actually done; it will take about five minutes to get through all that data.
So just keep an eye on this until it says that 500,000 records have been sent successfully. I'll pause and come back when that's done. All right, a few minutes have passed and it looks like it did catch up. We can see here in the log that it has processed 500,000 records and sent them successfully to the destination. So, yay! Hit Ctrl+C to get out of that, and let's see if that data is actually in S3. Go back to the AWS console, select S3, and you can see we have our order logs bucket. And you can see it did some interesting stuff here: it automatically broke the data up by date, so that's today's date and today's hour. You can also see that it did in fact split the data up into five-megabyte chunks. If you recall, we set the policy for this Kinesis Firehose stream to deliver data either every 60 seconds or every five megabytes, whichever comes first, and in this case it was every five megabytes. So, cool, we have separate log files for each five-megabyte chunk of data, and we can go ahead and open one up and see what it looks like. Hit "Open" with a text editor of your choice, and sure enough, there is some of our data.

So we have successfully published data through a Firehose, from monitoring a log directory on our EC2 instance, into Amazon S3, where it's broken up into CSV data that can later be analyzed by other systems in the AWS ecosystem. So cool! Some of the groundwork we're going to need for the rest of the course is already in place and working. We now have an S3 data lake of order data just sitting there waiting for us to play with. So, woohoo, let's move on and learn some more things we can do with that stuff.
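For reference, here's roughly what the finished /etc/aws-kinesis/agent.json from this exercise ends up looking like. Treat it as a sketch rather than an exact copy of the course file: the endpoint assumes us-east-1, and the file pattern and delivery stream name match the ones used above, so adjust them to your own region and setup. (The awsAccessKeyId/awsSecretAccessKey settings discussed earlier would also go at this top level, but the IAM role approach makes them unnecessary.)

```json
{
  "cloudwatch.emitMetrics": true,
  "firehose.endpoint": "firehose.us-east-1.amazonaws.com",
  "flows": [
    {
      "filePattern": "/var/log/cadabra/*",
      "deliveryStream": "PurchaseLogs"
    }
  ]
}
```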

12. [Exercise] Kinesis Data Streams

To illustrate the use of Kinesis Data Streams, we're going to start building out that order history app we talked about earlier. Remember, what we're trying to do is build mobile app access for a user to see a history of their orders on Cadabra.com, and at a high level the system looks like this: we're going to publish server logs from EC2, which we've already set up, and now, instead of a Firehose, we're going to publish them into a Kinesis data stream so that this data is accessible in real time. Later on we'll connect that to AWS Lambda and DynamoDB, but for now we're just going to focus on the piece that connects our EC2 instance, through the Kinesis agent, to a Kinesis data stream instead of a Kinesis Firehose. And we'll see how simple that is.

Back to the AWS Management Console; let's go into the Kinesis service. Now, I want to stress that this costs real money at this point. Kinesis streams are not part of the free tier. They're not terribly expensive: the stream we're setting up will cost thirty cents a day, more or less, which comes out to roughly $10 per month. If you're not okay with that, do not follow along here. Okay? You have been warned. This costs real money; not a lot of money, but real money. And if you forget to shut this data stream down when you're done, it could end up costing you more than you anticipate. If you are really sensitive about cash, you can just delete this data stream when we're done with this exercise and recreate it when you pick up this series of exercises later on. It's up to you. But I do want to warn you: keeping the stream up and running costs real money, about thirty cents per day. Again, you can delete it and recreate it later if you need to.

So let's go ahead and hit "Create data stream" here. This will create a real-time data stream, as opposed to a Firehose, which buffers for a minimum of 60 seconds. We'll give it a name; how about CadabraOrders? Again, pay attention to spelling, because it matters and it's a made-up word. We only need one shard because we're just playing around here, and that will keep it nice and cheap. There is a little tool here that will let you estimate how many shards you would actually need in production, but since we're just developing something, one shard is sufficient. You do need to understand, for the exam, how you would figure out how many shards you might need for a given application. And it has a little reminder here that one shard gives you 1 MB per second of write capacity, 2 MB per second of read capacity, and up to 1,000 records per second. Remember those numbers; they are important on the exam. Go ahead and hit "Create Kinesis stream." And the clock is ticking: we are being billed about thirty cents per day at this point. It will take a little bit of time to spin up, but not too much.

Meanwhile, we can go back to our EC2 instance and configure our logs to actually write into that stream. Let's return to our EC2 instance in PuTTY; I'm already logged in, so go ahead and log in if you need to. We'll go back to the /etc/aws-kinesis folder and edit our configuration file again with sudo nano agent.json. All right, so we just need to add a new flow here to define our new Kinesis stream. And while we're at it, we're going to tell the agent to convert our CSV data into JSON format, so that our Lambda function can use it a little more easily.
It's a neat feature of the agent that it can do that conversion for you. Now, to make life easier, I'm not going to make you type this all in. If you go to the course materials and open the Order History folder, you will find a text file with the agent flow we need. All you have to do is copy the flow out of it, so select it from this opening bracket to that closing bracket and copy it. Then go underneath the flows section here, right-click to paste that in, and follow it up with a comma and a return to make sure it lines up. It should look like that.

So basically we now have two different flows installed here. The first one, which we already walked through, is watching the /var/log/cadabra folder, waiting for new logs to come in and feeding them into the Firehose. The new one feeds the same folder into a Kinesis stream: instead of a deliveryStream for Firehose, it specifies an actual real-time stream called CadabraOrders, the one we just created. It will partition the data randomly to evenly distribute the processing of that data. And for data processing options, we're telling it to use the CSV-to-JSON converter that's built into the agent. We also gave it a list of all the field names, so it knows what to call each field of the CSV data as it converts it to JSON; that includes things like invoice numbers, stock codes, descriptions, quantities, and so on. We close that off with an end bracket and a comma. Don't forget the comma; it's easy to forget. And we should be in good shape at this point. Make sure your file looks like that, then Ctrl+O to write it out, Enter, and Ctrl+X.

All right, we're ready to rock and roll. Let's see if this thing actually works. To pick up the new configuration, we're going to restart the agent: sudo service aws-kinesis-agent restart. It shut down, and it started back up. If you did get a failure there, go back and double-check that configuration file, because something is probably wrong. Let's see if this is actually working. Go back to our home directory with cd ~ and put a few more logs in there: sudo ./LogGenerator.py, which by default will write out 100 new lines. Let's go ahead and tail the log and see if they get processed: tail -f /var/log/aws-kinesis-agent/aws-kinesis-agent.log. And there you see it: the tailer has parsed 100 records, transformed 100, and successfully sent zero so far. After a couple of minutes, you can see that it updated, and it has now successfully sent 200 records to destinations. "Wait, Frank," you might say, "I thought we only put 100 records in there." Well, remember, we're actually sending them to two different places at once: both to the Firehose stream we set up earlier and to the real-time Kinesis stream we just set up now. So it's processing every record twice, sending each record to two different places.

So according to the log, things are happy and we actually have data going into our Kinesis stream. We can go back to our console; first, Ctrl+C out of the tail. If we click on that stream, we can look at it, go to Monitoring, and refresh. You can see that we are actually seeing some data here in the CloudWatch metrics: we put some records in, and you can see the latency and the counts that went in, in the hundreds. Looks right to me. So sure enough, we do have data going into our Kinesis stream.
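If you want to check beyond the CloudWatch metrics, a quick boto3 sketch like the one below can pull a few records straight off the stream. This isn't part of the course exercise; it just assumes the CadabraOrders stream from above, a single shard, and credentials available wherever you run it.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
stream = "CadabraOrders"

# Look up the single shard and start reading from the oldest available record.
shard_id = kinesis.describe_stream(StreamName=stream)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=stream, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]

# Each record's Data blob should be the JSON produced by the agent's CSV-to-JSON conversion.
for record in kinesis.get_records(ShardIterator=iterator, Limit=5)["Records"]:
    print(record["Data"])
```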
Now we just have to do something with it. But we'll do that in our next hands-on exercise.
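For reference, the flows section of agent.json ends up looking roughly like this once the second flow is pasted in. It's a sketch, not the exact course file: the keys (kinesisStream, partitionKeyOption, dataProcessingOptions) are the Kinesis agent's documented options, but the precise file pattern and field names depend on the course's data set, so treat the list below as illustrative.

```json
{
  "firehose.endpoint": "firehose.us-east-1.amazonaws.com",
  "kinesis.endpoint": "kinesis.us-east-1.amazonaws.com",
  "flows": [
    {
      "filePattern": "/var/log/cadabra/*",
      "deliveryStream": "PurchaseLogs"
    },
    {
      "filePattern": "/var/log/cadabra/*",
      "kinesisStream": "CadabraOrders",
      "partitionKeyOption": "RANDOM",
      "dataProcessingOptions": [
        {
          "optionName": "CSVTOJSON",
          "customFieldNames": ["InvoiceNo", "StockCode", "Description", "Quantity",
                               "InvoiceDate", "UnitPrice", "CustomerID", "Country"]
        }
      ]
    }
  ]
}
```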

13. SQS Overview

So SQS is a much smaller topic for the big data exam. I just want to give you an overview and a quick refresher on SQS. You don't need to remember all the details; we'll see the differences between SQS and Kinesis in the next lecture, but you just need an overall understanding of how SQS works. So let's have a look. SQS is a queue: producers send messages to the SQS queue, and consumers poll messages from that queue. That's basically how the queue works. Now, that looks very similar to Kinesis, but it's actually very, very different.

SQS is AWS's oldest offering, over ten years old. It's fully managed and it scales automatically, whether you have one message per second or 10,000 messages per second; there is no limit to how many messages per second you can send to SQS. The default retention period of a message is four days, with a maximum of 14 days if you want. There's no limit to how many messages can be in the queue, and it has extremely low latency, less than ten milliseconds on each publish and receive API call. There is horizontal scaling in terms of the number of consumers, so you can scale to as many consumers as you want. You can have duplicate messages at high throughput, because you occasionally get at-least-once delivery, and you can also have out-of-order messages; it's only best-effort ordering, so the ordering is not as good as Kinesis. Finally, the messages have to be smaller than in Kinesis: there's a limit of 256 KB per message sent, and you need to remember that limit.

Now, how do we produce messages? Messages in SQS have a body of up to 256 KB; it's a string, so it's text, and you can add metadata attributes to it as key-value pairs. On top of that, you can optionally provide a delay in delivery. Then the message is sent to SQS, and what you get back is a message ID and an MD5 hash of the body, just so you remember what you sent to SQS. So it's very different from Kinesis: remember, in Kinesis we send bytes with a total max size of 1 MB; here we send strings with a 256 KB limit.

Now, what does consuming messages look like? Consumers poll SQS for messages and can receive up to ten messages at a time. They have to process each message within what's called the visibility timeout, and when they're done processing a message, they delete it from the queue using the message ID and receipt handle they received. That means that with SQS, your consumers poll messages, process them, and then delete them from the queue, so the messages cannot be processed by multiple different consumer applications. That's a very big difference versus Kinesis. So remember that process: consumers poll messages from SQS, process them, and then delete them.

Now there is a new type of SQS queue. There was a standard queue before, and now there is a FIFO queue, which means first in, first out. It's not available in all regions yet, but I think it's almost there. The name of the queue must end in ".fifo" to indicate that it's a FIFO queue, and you get lower throughput: up to 3,000 messages per second with batching, or 300 per second without, so definitely lower throughput.
But what you get is that the messages are processed in order by your consumers, so you get first-in, first-out ordering. Messages will be delivered exactly once, and you have a deduplication interval of five minutes if you send something called a Deduplication ID. So it's a different use case: lower throughput, but more guarantees on ordering, if that's what you require. The FIFO queue works so that if your messages are sent 1, 2, 3, 4, 5, they will also be read in the order 1, 2, 3, 4, 5 by your consumers.

Now, about the 256 KB limit: how do we send large messages in SQS? It's not really recommended, but there is something called the SQS Extended Client, which is a Java library that uses a trick. Basically, it uses an Amazon S3 bucket as a companion: it sends the very large payload, say 1 GB, to S3, and it sends a small metadata message to the SQS queue. The extended client on the consumer side receives that small metadata message saying where the file is in S3, and the consumer reads the large message directly from S3. So this is how you circumvent the 256 KB message limit if you use the SQS Extended Client, which can be really helpful for big data.

Now, use cases for SQS: decoupling applications, for example if you want to handle payments asynchronously; buffering writes to a database, for example if you have a voting application and expect a peak in throughput; and scaling quickly to handle large loads of messages coming in, for example an email sender. SQS can also be integrated with Auto Scaling through CloudWatch if you have EC2 instances reading from your SQS queue.

Now what about limits? Consumers can only process a maximum of 120,000 in-flight messages at a time. A batch request can only pull up to ten messages, totaling 256 KB. The message content is text, so it has to be XML, JSON, or just unformatted text. The standard queue has an unlimited number of transactions per second, so you can have as much throughput as you want going into SQS; the FIFO queue, with batching, supports up to 3,000 messages per second. So remember that. And the max message size is 256 KB; if you need more, you use the Extended Client. The data retention period can be anywhere from 1 minute to 14 days. But remember, once messages are read, they are deleted from the SQS queue. In terms of pricing, it's very different from Kinesis: you pay per API request, and you pay for the network usage as well.

In terms of SQS security, if we use the HTTPS endpoint for SQS, which is the default, we get SSL encryption in flight. We can have server-side encryption using KMS: we set the master key we want to use, and SSE-KMS will encrypt the body of the message we send to SQS, but not the metadata itself. So message ID, timestamps, and attributes will not be encrypted by server-side encryption. The IAM policy must allow the usage of SQS, so you can define which user can send data to which queue. And on top of that, we can set an SQS queue access policy if we want to restrict things further; it gives finer-grained control, for example over the source IP, or over the time the request comes in. So that's it for SQS, just a high-level overview; a quick code sketch of the produce/consume/delete cycle follows below.
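Here's a minimal boto3 sketch of that produce/consume/delete cycle, just to make the API flow concrete. It's not part of the course exercises; the queue URL is a placeholder, and for a FIFO queue you would additionally pass a MessageGroupId (and a MessageDeduplicationId, unless content-based deduplication is enabled).

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

# Produce: text body (up to 256 KB), optional metadata attributes, optional delay.
resp = sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"InvoiceNo": "536365", "Quantity": 6}',
    MessageAttributes={"source": {"DataType": "String", "StringValue": "cadabra"}},
    DelaySeconds=0,
)
print(resp["MessageId"], resp["MD5OfMessageBody"])  # message ID + MD5 hash of the body

# Consume: poll up to 10 messages and process each within the visibility timeout.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=10,      # long polling
    VisibilityTimeout=30,
).get("Messages", [])

for msg in messages:
    print("processing:", msg["Body"])
    # Delete with the receipt handle once processed, so it isn't delivered again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```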
You don't need to remember all these details; I just wanted to give you the general idea, the limits, all that stuff. I hope that was helpful, and in the following lecture we'll compare Kinesis Data Streams and SQS. So, until the next lecture.

14. Kinesis Data Streams vs SQS

So going into the exam, the important thing for you is knowing when you should use Kinesis and when you should use SQS. That's really the important thing, and that's what I want to drive home with you today. With Kinesis Data Streams, the data can be consumed multiple times by many consumers, and the data is deleted after the retention period, which goes from 24 hours to seven days. The ordering of the records is preserved at the shard level, even during replay. So there is an ordering constraint baked right into Kinesis. And because you can have multiple applications reading from the same stream independently, it's called pub/sub. You can use Spark or similar tools to do a streaming job where you query the data, process it, and so on. There's a checkpointing feature you can use with the KCL, which uses DynamoDB to track the progress of your consumption. And the shards must be provisioned ahead of time: we can do shard splitting or shard merging to increase or decrease the number of shards, but we have to specify in advance how much throughput we expect to get in our Kinesis data stream.

For SQS, well, it's a queue, it's used to decouple applications, and we only get one application per queue. We cannot have more than one, because the records are deleted after consumption. Messages are processed independently by the consumers of the queue, so out-of-order issues can happen; if you want ordering, you can use FIFO queues, but those queues have decreased throughput, remember, up to 3,000 messages per second with batching. You also have the capability to delay messages, and you get dynamic scaling of the load: there's no capacity to provision, you don't say in advance how many messages you're going to get in SQS, versus Kinesis where you do. Finally, remember that a Kinesis data stream has a message payload size of up to 1 MB, whereas SQS is 256 KB.

So I think the differences are pretty clear in this slide; the use cases are different. But here's a table if you need even more help, comparing Kinesis Data Streams, Kinesis Data Firehose, Amazon SQS Standard, and Amazon SQS FIFO. All of them are managed by AWS. Kinesis Data Streams provides ordering at the shard key level; Kinesis Data Firehose does not provide any ordering; SQS Standard does not provide any ordering; and SQS FIFO, if you specify a group ID, provides ordering per group ID. In terms of delivery, Kinesis Data Streams is at least once, so is Firehose, SQS Standard is at least once, and SQS FIFO is exactly once. You can replay data only with Kinesis Data Streams, not with Firehose and not with SQS. The maximum data retention is seven days for Data Streams, so between one and seven days; Data Firehose does not have any data retention, because it's used to deliver data; and SQS can retain your data for up to 14 days. In terms of scaling, with Data Streams you get 1 MB per second per shard for producers and 2 MB per second per shard for consumers, enhanced fan-out or not. For Firehose, no scaling is required; it scales for you, and so does SQS. And if you use SQS FIFO, you have 3,000 messages per second with batching, which is a soft limit you could increase if you wanted to. Now, the maximum object size for Kinesis Data Streams is going to be 1 MB. For Kinesis Data Firehose, it's going to be 128 MB; that's the max buffer size you can set.
So the destination, maybe S3, can have an object of up to 128 MB written to it. And for SQS it's 256 KB, unless you use the SQS Extended Client. Okay, that's it for the technicalities, but what about use cases? Maybe that makes more sense to you. Well, the SQS use cases will be to process orders, process images, auto-scale queues according to messages, buffer and batch messages for future processing, insert data into a database, or maybe do some request offloading. So it's more of a development-driven workflow, whereas Kinesis is going to be more for big data: fast log and event data collection and processing, real-time metrics and reports, mobile data capture, real-time data analytics, gaming data feeds, complex stream processing, or data feeds from the Internet of Things. So this is, I think, the most important thing here: Kinesis is used for big data streaming, whereas SQS is used for decoupling your applications, more from a developer's perspective. Okay, so I hope that really helps. This is all you need to know going into the exam: you must be able to distinguish SQS from Kinesis. I hope that was helpful, and I will see you in the next lecture.
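Since the exam expects you to reason about shard counts from the per-shard limits mentioned above (1 MB/s and 1,000 records/s in, 2 MB/s out for standard consumers), here's a tiny back-of-the-envelope sketch of that arithmetic. The workload numbers at the bottom are made up purely for illustration.

```python
import math

# Per-shard limits for Kinesis Data Streams, as described in these lectures.
WRITE_MB_PER_SEC = 1.0        # ingest bandwidth per shard
WRITE_RECORDS_PER_SEC = 1000  # ingest records per shard
READ_MB_PER_SEC = 2.0         # read bandwidth per shard (standard consumers)

def shards_needed(in_mb_s, in_records_s, out_mb_s):
    """Rough shard count: the tightest of the three constraints wins."""
    return max(
        math.ceil(in_mb_s / WRITE_MB_PER_SEC),
        math.ceil(in_records_s / WRITE_RECORDS_PER_SEC),
        math.ceil(out_mb_s / READ_MB_PER_SEC),
    )

# Hypothetical workload: 5 MB/s written, 4,000 records/s, consumers reading 6 MB/s.
print(shards_needed(5, 4000, 6))  # -> 5 shards
```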

15. IoT Overview

Okay, so we are getting into IoT. IoT stands for Internet of Things, and this is quite a light subject on the exam, but you never know what kind of questions you may get. So I want to give you a high-level overview of all the services offered by IoT; you'll see in the end that it makes a lot of sense. This slide will basically explain everything you need to know, and then we'll take a closer look at each feature.

So IoT stands for Internet of Things, and because it's the Internet of Things, in AWS an object is called a "thing." An IoT thing can be whatever you want: here it's a thermostat, but it could be a bike, a car, or a light bulb. It could be anything, really. It's just a device that connects to your AWS infrastructure, and we configure these devices because we want to retrieve data from them and interact with them. So how does it work? Well, this IoT thing is going to be registered with the IoT cloud in AWS, in something called the Thing Registry. The Thing Registry assigns the device an ID and ensures that it is properly authenticated, secured, and so on. Then this IoT thing needs to communicate with our cloud, and for this it uses something called a device gateway. A device gateway is a managed service that allows your IoT things to communicate with AWS. Now, when your IoT thing reports something, say that the temperature is now 30 degrees Celsius, it sends a message to an IoT message broker. The message broker is going to be like an SNS topic: it takes a message and sends it to many different destinations. For example, we could make that message go through a rules engine, and the rules engine says that whenever the IoT thing sends you a message, you should send it to Kinesis, SQS, Lambda, or one of many other targets. So with the rules engine, we have the ability to send our messages to different targets.

But you can do other things too. You can also integrate with something called an IoT device shadow, and this device shadow is literally a shadow of your device. That means that even if your thermostat right here is not connected to the Internet, you can change its state. On the device shadow, for example, we can say, "Okay, we want the temperature to now be 20 degrees Celsius," and whenever the device reconnects, the device shadow will tell it, "By the way, you should be 20 degrees Celsius right now." So all of this, at a high level, is what you need to know. From an IoT perspective, I would say the most important things are the Thing Registry (and knowing that there is security around it), the IoT message broker, the rules engine, and the device shadow. Once you get this architecture, it all makes sense.

Now, just for your curiosity, I'm going to go into a little more detail about each of these features, but at a high level you know enough already. Before we get into the deep-dive lectures, I'd like to give you a quick overview of the AWS IoT Core console, because I think it has a really cool tutorial. So follow me. If you go to Services, you will see all the categories here; I just want to call your attention to "Internet of Things" on the right-hand side. It has a lot of different services. Now, thankfully, you don't need to know all of them. The only one we'll focus on today is IoT Core.
All these other services, things like IoT Greengrass, I'll only mention in passing; you don't need to know any of them. So let's go to IoT Core and choose our region, whatever region you have here. I'm not greeted with a tutorial because I already completed it, but let's go to the bottom left, where you can see a Learn button that I'm going to click on. And the reason I'm doing this is that there is a small tutorial right here that is really well done, and I think it makes a great learning tool. So let's go through the tutorial, and I'll be able to give you, one more time, the high-level overview of all the things I just described, but this time with animated diagrams. Okay? So let's start the tutorial.

So this is the device gateway. The device gateway basically enables your devices to securely communicate with the AWS cloud. Say, for example, we have two things: there's a connected light bulb right here, connected to our device gateway, and we also have a control unit, maybe in our house, where we can set red, green, or blue for our light bulb, and it's also connected to our device gateway. So they're both connected. For example, if I click on green, it will send a message all the way to the device gateway and back into my light bulb, and now my light bulb is green. If I click on blue, my light bulb becomes blue, and red, obviously it will become red. So as you can see, my two devices here, even though they're in my house, don't communicate directly with each other. What they do is communicate with this device gateway, which has a message broker inside of it, and they communicate through this device gateway to change each other's state. So again, green will be sent to the device gateway and back into the light bulb, same for blue. This shows really well the concept of what a device gateway is. Okay, that's the first step; hopefully that makes sense.

Next is the rules engine. Here, you're able to define a bunch of rules around what happens when my control unit sends a message to my device gateway: what should happen, and how should the message change? You can have very complicated rules, but here they show us a rule saying blue should be turned into green. So let's have a look. If we click on red, my light bulb is red. If we click on green, my light bulb is green. But if we click on blue, you see here it went in blue and it came out green. Let's do it again: blue in, green out, because the rules engine here said that blue should be transformed into green. I can switch off that rule, and now, if I send blue, my light bulb is going to be blue. So the rules engine contains a bunch of rules that you can define, allowing you to modify the behavior of your devices based on the rules you define. So, with the rule switched back on, if I click on blue again, it will turn green. Okay, I hope that makes sense, and you can also play with it.

Now, with this rules engine, we can set up rule actions, and actions are really cool. Actions are what I had in my diagram before: when we define these actions, we can send data to Kinesis, SQS, Lambda, et cetera. So here they're showing that we had the blue-to-green rule before; again, if I click on blue, it's going to change to green and go back. But now, red is going to be sent to everything on the right-hand side: it will also be sent to my mobile device, my AWS Lambda function, and a database. So let's click on red.
And as you can see, the red message goes in all directions because of this rule right here. So if I click on red, it goes to my mobile device, my Lambda function, and my database. If I turn off this rule again, you'll see it's not connected anymore, so if I click on red, it will just go to my light bulb. You can play around here as well and see which rules do what. What we need to remember is that the rules engine allows us to send data to many different targets within AWS. And as you can see, my devices don't interact directly with AWS Lambda; it all goes through my device gateway and message broker.

Okay, next is the concept of the device shadow. The device shadow is really, really important to understand, and really weird at first to learn about, so hopefully I'll find a good way of describing it to you. The device shadow is going to be a shadow of your device. So here's my real device, my real light bulb, and you can see it's red, and my device shadow in the cloud, within the blue cloud here, is also red. Now I'll just deactivate every rule for now. If I click on blue, then my light bulb is blue and my device shadow is blue as well. So like the name indicates, my device shadow is literally a shadow of my device.

So why do we need this? Well, say that the Wi-Fi in my home goes down, or my light bulb stops communicating with the AWS cloud. Now I want to change the color to blue. So I'll say, okay, you should become blue, and only the shadow here gets updated. That means that, within the AWS cloud, my device is supposed to be blue, but the device itself doesn't know that yet, because it's not connected to the cloud anymore. As soon as it regains that connection, the device shadow will send a message back to it saying, "Hey, by the way, you should be blue," and it becomes blue. So basically, to control the state of devices that are even offline, we can use a device shadow. For example, I'll set it to red: my device shadow is red now, and the moment I turn my light bulb back on, boom, it becomes red. So the device shadow is a way for you to control even your offline devices, using the device shadow in the AWS cloud. Now you can turn on all the rules and have some fun and play; the cool thing here is that you can flip enough switches to see what would happen with the device shadow, the rules engine, and so on.

Finally, it shows that this is cool when you have a light switch in your house, but you can also program an application, for example this one, that allows you to control the light bulb from a mobile device like your iPhone. You say, okay, I want it to be green; it sends a REST API call directly into the IoT cloud, and my light bulb becomes green. So I think this is a really good animated overview of how the entire IoT solution works. I hope you liked it. In the next lecture, I'll do a deep dive into all these services, but hopefully this was a good enough introduction. All right, I will see you in the next lecture.
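To make the device shadow idea a bit more concrete in code, here's a minimal boto3 sketch of updating and reading a shadow. This isn't part of the course exercises: the thing name is hypothetical, and it assumes a registered thing plus credentials with IoT permissions.

```python
import json
import boto3

# The "iot-data" client talks to the device gateway / shadow service.
iot = boto3.client("iot-data", region_name="us-east-1")
thing_name = "my-light-bulb"  # hypothetical thing name

# Request a state change via the shadow; this works even if the device is offline.
payload = json.dumps({"state": {"desired": {"color": "blue"}}}).encode("utf-8")
iot.update_thing_shadow(thingName=thing_name, payload=payload)

# Read the shadow back: "desired" is what we asked for, "reported" is what the
# device last said about itself. The device syncs up when it reconnects.
resp = iot.get_thing_shadow(thingName=thing_name)
print(json.loads(resp["payload"].read()))
```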

Go to the testing center with ease of mind when you use Amazon AWS Certified Data Analytics - Specialty VCE exam dumps, practice test questions and answers. The Amazon AWS Certified Data Analytics - Specialty (DAS-C01) certification practice test questions and answers, study guide, exam dumps, and video training course in VCE format help you study with ease. Prepare with confidence and study using Amazon AWS Certified Data Analytics - Specialty exam dumps, practice test questions and answers from ExamCollection.



Comments
* The most recent comments are at the top
  • Michael
  • Brazil
  • Sep 25, 2020

@vivian, lol, you can find the latest and most valid amazon das-c01 practice test questions here. i assure you that these Qs will make you comfortable with the real exam environment. i prepared for my assessment using them and they helped me clear it with excellent marks. try them out in your revision and you’ll not have regrets! oh, BTY, no need to pay for them. ))

  • vivian
  • United Kingdom
  • Sep 23, 2020

hello there, sorry my silly question: are the DAS-C01 practice test questions offered by Exam-Collection valid? should I use them in my revision 4 the exam? Thx

  • keagan_PS
  • South Africa
  • Sep 22, 2020

guys! for excellent exam prep, you’ll need to dl amazon das-c01 practice tests. FYI, all sorts of questions that are likely to be featured in the actual test are available on these materials!!! with them i achieved a very high score in my exam.960 points!!!!! use them as they won’t cost you a penny!!!

  • Pablo
  • United States
  • Sep 21, 2020

@jacky3512, well, the Amazon DAS-C01 dumps offered here r rich in useful info for the exam. these resources helped me immensely 2 grasp the core concepts & gain the relevant skills. at least i hope so. IAC they were 4 sure pivotal in my gr8 performance in the test. use them w/o fear of disappointment. GL & tons of best wishes! ))))

  • Rosemary
  • Italy
  • Sep 18, 2020

wow! i’m really happy for acing my test and earning Amazon AWS Certified Data Analytics - Specialty certification. TBH, you’ve done an incredible job dudes from Exam-collection… you’ve designed very helpful training products for this test. IMHO, Exam-collection is the best site for everyone preparing for DAS-C01 assessment…..

  • jacky3512
  • Australia
  • Sep 17, 2020

hi ppl, i’m in the lookout for high-quality das-c01 dumps? r the ones from this platform worth using? waiting 4 honest opinions asap , TY!

  • Samuel
  • India
  • Sep 16, 2020

@susan, XOXO!!! the DAS-C01 practice tests provided on this platform are wonderful!!! they not only helps u 2 measure ur skills and knowledge but also helps u to combat all the nervousness u have for the exam. moreover, they’ve same format as the actual test. use them in ur prep and they’ll make u effective enough to get through the assessment. strongly recommend!!!

  • susan
  • United States
  • Sep 16, 2020

hi guys….looking 4 the best materials 4 das-c01 exam …plsssssss recommend!

  • Gayathri
  • United States
  • Jun 13, 2020

practice test for data analytics

  • Dan
  • United States
  • Jun 10, 2020

Getting ready for this exam

  • goo
  • Singapore
  • May 01, 2020

want to pass AWS data analytics

  • rajnish
  • Canada
  • Apr 13, 2020

i need this exam now


