
Pass Your Microsoft Azure Architect AZ-303 Exam Easy!

100% Real Microsoft Azure Architect AZ-303 Exam Questions & Answers, Accurate & Verified By IT Experts

Instant Download, Free Fast Updates, 99.6% Pass Rate

AZ-303 Premium Bundle

$79.99

Microsoft AZ-303 Premium Bundle

AZ-303 Premium File: 213 Questions & Answers

Last Update: Mar 14, 2024

AZ-303 Training Course: 93 Video Lectures

AZ-303 PDF Study Guide: 926 Pages

The AZ-303 Bundle gives you unlimited access to "AZ-303" files. However, this does not replace the need for a VCE exam simulator. To download the VCE exam simulator, click here.


Microsoft Azure Architect AZ-303 Practice Test Questions in VCE Format

File | Votes | Size | Date
Microsoft.test4prep.AZ-303.v2023-12-26.by.zhangqiang.126q.vce | 1 | 4.38 MB | Dec 26, 2023
Microsoft.pass4sure.AZ-303.v2021-12-08.by.stanley.121q.vce | 1 | 4.32 MB | Dec 08, 2021
Microsoft.selftestengine.AZ-303.v2021-10-20.by.harry.113q.vce | 1 | 3.88 MB | Oct 20, 2021
Microsoft.pass4sureexam.AZ-303.v2021-08-19.by.oscar.106q.vce | 1 | 3.15 MB | Aug 19, 2021
Microsoft.pass4sure.AZ-303.v2021-04-04.by.zara.97q.vce | 1 | 2.19 MB | Apr 06, 2021
Microsoft.pass4sure.AZ-303.v2020-10-19.by.lucy.30q.vce | 2 | 558.58 KB | Oct 19, 2020
Microsoft.selftestengine.AZ-303.v2020-07-28.by.ryan.25q.vce | 2 | 436.6 KB | Jul 28, 2020

Microsoft Azure Architect AZ-303 Practice Test Questions, Exam Dumps

Microsoft AZ-303 Microsoft Azure Architect Technologies exam dumps VCE, practice test questions, study guide & video training course to study and pass quickly and easily. Microsoft AZ-303 Microsoft Azure Architect Technologies exam dumps & practice test questions and answers. You need the Avanset VCE Exam Simulator in order to study the Microsoft Azure Architect AZ-303 certification exam dumps & Microsoft Azure Architect AZ-303 practice test questions in VCE format.

Implement Azure Infrastructure

8. ARM Deployments Walkthrough

So far we've been deploying resources using either PowerShell or the Azure Portal. This is a very easy and quick way of doing things, but sometimes you want more control: from the point of view of repeatability, and from the point of view of being able to keep hold of the code you use when deploying things, reuse it more easily, and maybe even keep it in a library. To achieve this, there is another way we can deploy resources, and that is through the use of Resource Manager templates, or ARM templates.

For anything that we've deployed so far, if we go into it, there is an 'Export template' option. When we go to the export template, what we see is a JSON-formatted file that defines the resource that we've deployed. Whenever we deploy something, either within the portal or even using PowerShell, what's actually happening is that in the background these templates are being generated and then used by the Azure infrastructure to deploy your resources. Because of that, we can use these templates ourselves to directly create our resources.

There are two ways we can get the templates, or technically three. The first is that we can build a resource manually through the portal, go through to export the template, and then save that template or add it to a library (we'll come back to the library shortly). Once we've downloaded that template or added it to our library, we can edit it and then redeploy using the edited settings. So for example, on this one we could take the virtual machine name, change it, and redeploy a new virtual machine.

However, a better way of doing it would be to use a clean template. If we go to GitHub and browse to github.com/Azure/azure-quickstart-templates, you'll find a number of templates that have been predefined by Microsoft and the community that we can use as a quick start, and if you search through here, you'll see a wide range of available templates. Let's take a look at the 101-vm-simple-linux template as an example. Within this template folder, we can see the two main files that we use, azuredeploy.json and azuredeploy.parameters.json, and we'll come back to those in a second. But you'll also see these two buttons here. The first is Visualize. This is quite useful: if we click it, it runs a visualisation tool that basically reads the JSON and shows us what gets deployed. So as you can see from the JSON in that template, it's deploying a virtual machine that's got a public IP, a VNet, a security group, and a network interface.

We can even click 'Deploy to Azure'. If we do that, what it basically does is upload the template to Azure and create a UI form based around the template. We can then use this form to go in and change the various items that we want, fill in the various details, and so on. Or we can even edit the template itself, which allows us to change how the template is built, or edit the parameters, which is where we define the unique settings for the template in question. Once you fill in the parameters, you would click 'Purchase', and that would start the deployment of that virtual machine.

The other thing we can do is download those template files directly. So here we've downloaded the azuredeploy.json and the parameters file from the GitHub template, and we'll have a quick look at what an ARM template looks like.
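If you prefer to drive these same steps from PowerShell, the sketch below shows exporting a template from an existing resource group and deploying a quickstart template straight from its GitHub URL. This is illustrative only: the resource group names are made up, and the raw GitHub path for 101-vm-simple-linux may differ, as the quickstart repository layout has changed over time.

```powershell
# Hedged sketch using the Az module (run Connect-AzAccount first).
# Resource group names are examples; the template URL may need adjusting
# to match the current layout of the azure-quickstart-templates repository.

# Export the ARM template Azure generated for an existing resource group.
Export-AzResourceGroup -ResourceGroupName 'rg-existing-vm' -Path './exported-template.json'

# Deploy the quickstart template directly from GitHub.
$templateUri = 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-linux/azuredeploy.json'

New-AzResourceGroup -Name 'rg-linux-demo' -Location 'uksouth' -Force

# You'll be prompted for any mandatory template parameters (admin username,
# password or key, and so on) that don't have default values.
New-AzResourceGroupDeployment -ResourceGroupName 'rg-linux-demo' -TemplateUri $templateUri
```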
First of all, within the template we define a number of parameters. These parameters are values that we can supply through our parameters file. In this way, we can create a template that requires these parameters and reuse that template for multiple resources, simply updating the parameters file accordingly. So if you look at this example, we've got a parameter for the VM name, and you can see it sets certain defaults if we don't actually provide any information. It's asking for a username, an authentication type (which again defaults to password), a DNS label prefix, and so on and so forth. As you can see, we can also constrain the allowed values that can be supplied for any of these.

If we carry on going down, we eventually get to a variables section. The variables section is similar to the parameters in that it allows us to define certain values; however, we can't actually set these variables via the parameters file, so they're more hardcoded. What we would normally use the variables section for is to dynamically create values based on the parameters that we've already been given. So for example, we want to define a public IP address name. Rather than asking the user to define that within the parameters, we can use the concat function to take the vmName parameter that we've supplied and simply tag the string 'PublicIP' onto the end of it. In this way, we can dynamically create values that then get used further on in the template.

After the variables section, we come to the resources section, and the resources section is where we actually define what is going to get deployed. So in this Linux example, we can see that, first of all, we're creating a network interface resource. The name of that network interface is going to come from the variables, which, if you remember, were generated from the vmName with the interface suffix added on the end. Again, we're setting the location, which comes directly from the parameters, and then we've got this dependsOn section. The dependsOn section means that we can control the order in which items get created. So even though we've defined this network interface first, up here, it actually won't be created until these three resources are created first: the network security group, the virtual network, and the public IP. We then get the properties of the resource, and each resource will be slightly different. For a network interface, we're going to have an IP configuration where we set things such as the subnet, the private IP allocation method, and so on. If we look through these resources, we'll see all the various items that can be configured that make up the virtual machine; and if we remember from the visualisation, they include a network interface card, a public IP address, and a network security group.

Once we've defined our actual template file, we then define a parameters file, and the parameters file is where we define the individual parameters for the particular instance of the resource we're creating. So, for example, if we wanted to deploy a Linux virtual machine based on that template, within this parameters file we'd enter some of the values that we wanted to set, such as a local user name, a password, and a DNS label.
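To make that structure concrete, here is a heavily trimmed, illustrative template with one parameter-driven variable and a single resource, written to disk and deployed from PowerShell. It is a sketch in the spirit of the quickstart file, not the actual file; the names, API version, and resource chosen are assumptions.

```powershell
# Illustrative only: a minimal template showing the parameters, variables and
# resources sections described above (not the real quickstart template).
$template = @'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmName": { "type": "string", "defaultValue": "simpleLinuxVM" },
    "location": { "type": "string", "defaultValue": "[resourceGroup().location]" }
  },
  "variables": {
    "publicIpAddressName": "[concat(parameters('vmName'), 'PublicIP')]"
  },
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "apiVersion": "2019-11-01",
      "name": "[variables('publicIpAddressName')]",
      "location": "[parameters('location')]",
      "properties": { "publicIPAllocationMethod": "Dynamic" }
    }
  ]
}
'@

# Save it locally and deploy it, overriding the vmName parameter inline.
Set-Content -Path './azuredeploy.json' -Value $template
New-AzResourceGroupDeployment -ResourceGroupName 'rg-linux-demo' -TemplateFile './azuredeploy.json' -vmName 'demovm01'
```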
Once we have our parameters file and our template file, there are two ways we can actually go ahead and create the resource. First of all, we can go to 'Create a resource', search for 'template', choose 'Template deployment', and click Create. We can then either start from one of the common templates, which would pretty much pick up the kind of template we had earlier (as you can see, there are other options too, such as web apps and databases), or even pick templates directly from the GitHub quickstart templates that we were looking at earlier by simply going in here and searching for the one we want. Alternatively, we can choose 'Build your own template in the editor'. Then we could go ahead and load our JSON file, and once that's loaded, save it. Again, it brings up a similar screen to what we saw earlier, essentially the same screen we saw when we did the deployment from the GitHub template, where we can now go in and set certain values, such as the resource group, or even edit the parameters directly, which reads the main template file and generates the parameters that we can then fill in. From there, we click 'Purchase', and again it kicks off the deployment of our resource.

Another thing we can do is add the template to a template library. If we go to All resources and search for 'templates', we'll see the Templates option, and within here we can add the template that we created earlier: we copy our template in and click Add. We then have a set of templates that we can deploy any time we want, and we can build up our own library accordingly. In the next lecture, we're going to look at a final way we can use an ARM template to deploy our resources, and that's using Azure DevOps.
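To round the lecture off, the same template-plus-parameters-file deployment can be validated and run entirely from PowerShell. A small sketch, assuming the two files downloaded earlier are in the current folder and using the example resource group from above:

```powershell
# Validate the template and parameters first (no resources are created).
Test-AzResourceGroupDeployment `
    -ResourceGroupName 'rg-linux-demo' `
    -TemplateFile './azuredeploy.json' `
    -TemplateParameterFile './azuredeploy.parameters.json'

# Deploy using the template plus its parameters file.
New-AzResourceGroupDeployment `
    -Name 'linux-vm-deployment' `
    -ResourceGroupName 'rg-linux-demo' `
    -TemplateFile './azuredeploy.json' `
    -TemplateParameterFile './azuredeploy.parameters.json'
```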

9. ARM Deployments with DevOps Walkthrough

As well as deploying resources using templates directly through the portal, you can also use something called Azure DevOps. Azure DevOps is a full CI/CD and Git repository platform that allows you to fully manage all aspects of development and deployment work. It's a free service, or at least it's free for up to five users within your organisation; beyond that, you pay around $15 per month per user, depending on the level of access they need. If you go to dev.azure.com and sign in with your account, and you haven't already got an organisation, it's a very simple process to go through and set one up, as we've done here.

Within an organisation, you can create individual projects, and I've created a project called ARM Templates. Within projects, you have a number of different options. There are boards, where you can manage workloads and work items in an agile manner. We've got repositories, which are pretty much the same as repositories within GitHub: they allow you to upload and sync your code, share it with teams, and so on. And in fact, that's what we've done here: I've taken the Linux template that I downloaded and modified earlier and uploaded it into a repository. So we've got our azuredeploy.json file here, which defines a simple Linux VM. I've also uploaded a parameters file, and in that parameters file I've defined my username, password, and a DNS name for the service that I want to deploy. I've also added a file containing a YAML pipeline.

Within DevOps, you can create pipelines to build, test, and deploy code, and there are two ways you can create them. The first is going through the GUI and doing it using the more traditional, classic method, where you pick individual tasks. However, a more up-to-date way is by using something called a YAML pipeline. The great thing about YAML pipelines is that you define them in a configuration file, which means you can keep that pipeline configuration with the rest of your code, which makes it much more portable. To define a YAML pipeline, we define individual stages. So for example, here we've got a stage for building the virtual machine; within a stage you have a job, and within a job you have steps and tasks. This first stage is about taking our JSON files: it builds them into a publishable artefact and then publishes that artefact under the name 'virtual machine'. Our next stage actually deploys the virtual machine. What this is basically doing is downloading the artefact that the build stage created and then deploying it, and we've configured things like the subscription I want to deploy to, the resource group I want to deploy to, the location, as well as the deployment path, the deployment template, and the parameters file that I want to use.
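The lecture doesn't reproduce the full pipeline file, but a minimal two-stage pipeline along the lines described might look like the sketch below. It is written out from PowerShell purely to keep all the snippets in one language; normally you would just commit the YAML file to the repository. The service connection name, resource group, and paths are assumptions, and AzureResourceGroupDeployment@2 is one commonly used task for ARM deployments rather than necessarily the exact task used in the lecture.

```powershell
# Illustrative sketch only: a minimal build + deploy YAML pipeline similar to
# the one described above, saved to pipeline.yml. All names are examples.
$pipelineYaml = @'
trigger:
  - master

stages:
  - stage: Build
    jobs:
      - job: BuildTemplates
        pool:
          vmImage: windows-latest
        steps:
          # Package the ARM template and parameters file as a pipeline artifact.
          - publish: $(System.DefaultWorkingDirectory)
            artifact: virtualmachine

  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: DeployTemplates
        pool:
          vmImage: windows-latest
        steps:
          - download: current
            artifact: virtualmachine
          # A commonly used task for ARM deployments; input values are examples.
          - task: AzureResourceGroupDeployment@2
            inputs:
              azureSubscription: 'my-service-connection'
              action: 'Create Or Update Resource Group'
              resourceGroupName: 'RSG-Linux-VMs'
              location: 'UK South'
              templateLocation: 'Linked artifact'
              csmFile: '$(Pipeline.Workspace)/virtualmachine/azuredeploy.json'
              csmParametersFile: '$(Pipeline.Workspace)/virtualmachine/azuredeploy.parameters.json'
'@

Set-Content -Path './pipeline.yml' -Value $pipelineYaml
```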
So once we've got all that in place, I can go to my pipelines and say 'Create a pipeline'. First of all, I'll demonstrate using the classic editor, so we're going to ignore the YAML file for now and choose 'Use the classic editor'. I'm going to tell it the code is coming from Azure Repos; as you can see, we can also hook into GitHub, GitHub Enterprise, or even your own Git repositories. So I'm selecting the ARM Templates project and the Linux repository, and I'm choosing the master branch (I've only got one branch for this), then just click Continue. We then have a number of pre-built templates that we can use; however, what I want to do doesn't fit any of these, so what I'm going to do is create an empty job.

Once the job is defined, I'm going to give it a name, something like 'Classic Linux Deploy', because I'm using the classic pipeline editor. These jobs get run and built on what's called an agent pool. Within the Azure DevOps platform, virtual machines are spun up, our code is downloaded to them from our repositories, and they perform the builds and deployments. We can either use the built-in, hosted agent pools, or we can create our own private agent pools and run those agents within our own subscription or even on premises. When using the hosted agent pools, you can then define the agent specification, which you can see here: we can build and run our pipelines on macOS, Ubuntu, or Windows agents. I'm going to go for the default Windows one.

The next thing I need to do is define my agent job. I need to click this plus here, which is going to create a task, and the task I'm looking for is an ARM deployment. So if I do a search for 'ARM', we get this Azure Resource Group Deployment option. I'm going to click Add on that, and the task appears. What I want to do, first of all, is select my subscription, so I'm going to go there and select it. The first time you do this, you'll get an Authorize button here so that you can sign in and authorise your subscription for use by this DevOps pipeline. The action is going to be 'Create or update resource group', and I need to select what the resource group is, so we're going to say 'RSG-Linux-VMs'. We now need to tell it the location, so I'm going to go for UK South. Now, the template location is a linked artifact; that means it's an artefact coming from our repo, which we defined in our Get Sources. But what we need to do is tell it where the template file is and where the parameters file is. First, we'll go to the ellipsis here to select our template file (we want the azuredeploy.json), and then similarly I need to tell it where the parameters file is. You can actually override some of those parameter values in here, but I'm going to leave those as they are, and in fact I'm going to leave everything else alone.

If we wanted to kick this off, I would click 'Save & queue'; in fact, what I'm going to do is just save it. Once that's created, I can go back to my pipelines view, and I can see my classic Linux deploy pipeline here, and from here I could go and run that pipeline, and it would use the ARM template and parameters file to connect to my subscription and spin up a Linux virtual machine. But we're not going to do it that way.

Next, I'm going to show you how to create a pipeline using the YAML file. So I'm going to tell it the code is a YAML file that's in my repo, and the repo is automatically picked up for me. I'm going to tell it to use an existing YAML file, and I need to go into the path; it's detected that in there is a file called pipeline.yml, so it offers me that, and I'm going to click Continue. It then loads the file in, and this is basically just the file that was in our repository, again defining the stages and the jobs. You don't need to know in too much detail how these are defined for the exam; it's more important just to understand the fact that you can use them and what the different options are.
But once that's all there, I can just go ahead and click Run, which creates my pipeline and then kicks it off. So using the YAML file, you can see, is much simpler, because I've defined everything in the file, and within the pipeline itself I just need to reference that file; which means I could quite easily pick up that entire code base, move it into another Azure DevOps organisation or project, and just copy it across. This one is telling me that it needs permissions, so we can go in there, and I'm just going to tell it to permit that. The first thing it does is perform the build stage, which is basically just getting all the files together, checking them, validating them, and then creating a deployable artefact. Then in the deploy stage, it downloads that artefact and kicks off the deployment itself.

So if we look at our resource group (I've gone to the Linux VMs resource group and into the Deployments option here), we can see that we've got this deployment that's been kicked off, and it's currently in the deploying stage. If we go into it, we can see more details: you can see it's created a VNet, a network security group, a public IP, and a network interface card, and finally there's the actual deployment of the VM itself. That will take a little while to complete, and you can monitor it either within the Azure Portal itself or within the pipeline. Once it's completed, we'll see our new Linux VM that's been built here, based on the parameters that we put in the configuration file.

The whole point of this is that there are many ways in which we can manage and deploy our resources, and this doesn't just apply to virtual machines. You can use this for any resource in Azure: web apps, Azure SQL, anything you can create through the portal you can create through ARM templates, either directly, as we saw earlier, by loading the templates into the portal, or through Azure DevOps pipelines. The main reason you might want to use DevOps pipelines is that they give you greater control. Once you've defined a pipeline, you've got that pipeline and its associated code ready to go whenever you need to deploy it. So you can repeat that deployment to a different subscription or environment, or, if you need to destroy the resources for whatever reason (perhaps you want to save money when you're not using them), you can destroy them and then re-run the pipeline whenever you need them again. So the point is, it gives you a repeatable, auditable method for deployment.
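As a small aside, you can also check the deployments a pipeline has kicked off from PowerShell rather than the portal. A sketch, using the example resource group name from above:

```powershell
# List recent deployments in the target resource group and their status.
Get-AzResourceGroupDeployment -ResourceGroupName 'RSG-Linux-VMs' |
    Sort-Object Timestamp -Descending |
    Select-Object DeploymentName, ProvisioningState, Timestamp
```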

10. Storage Overview

Azure Storage is Microsoft's cloud storage solution for modern data storage scenarios. It offers a massively scalable object store for data objects, a file system service for the cloud, and a messaging store for reliable messaging. It also includes a very basic NoSQL store.

It's important to understand that Azure Storage is durable and highly available: redundancy ensures that your data is safe in the event of hardware failures, and you can even opt to replicate data across data centres or even geographical regions for additional protection. Azure Storage is secure: all data written to Azure Storage is encrypted, and Azure Storage provides you with fine-grained control over who has access to your data, giving you options such as who can read, who can write, and whether to provide further access. Azure Storage is scalable: it's designed to be massively scalable to meet data storage and performance needs. It's managed: Microsoft handles all the hardware maintenance, updates, and critical patches for you. And finally, it's accessible: data is accessible from anywhere in the world over HTTP or HTTPS. There's also a wide range of software development kits for a variety of languages, including .NET, Java, Node.js, Python, and so on, and it has a mature REST API. This means you can control the service directly over the Internet using standard REST API calls, which in turn are used by things such as PowerShell, the Azure CLI, or even third-party tools such as Azure Storage Explorer.

Azure Storage can be used for a wide range of tasks. For example, it can be used as storage for virtual machines; this includes disks and files. Disks are persistent block storage for Azure IaaS virtual machines, and files are fully managed file shares in the cloud. It also enables you to store unstructured data, such as blobs and Data Lake stores. Blobs are highly scalable, REST-based object stores, and you can even use the Hadoop-compatible Data Lake Store. Finally, you can also store structured data. This includes Azure Table storage, Azure Cosmos DB, and Azure SQL DB. Tables are basic NoSQL key-value stores. Cosmos DB takes this to the next level, providing a fully globally distributed database service. And Azure SQL DB is a fully managed SQL database service. All of these run on top of Azure Storage.

Let's take a closer look at the various storage options available to you. First, we have Azure Blobs: massively scalable object stores, so think text or binary data. Next, we have Azure Files: this is more like a traditional file-sharing service, but rather than having to manage virtual machines, it's already there and managed for you. We have Azure Queues: a messaging store, which allows you to write messages to a queue from one application and have another application read those messages from the queue. And then we have Azure Tables: a straightforward NoSQL store of standard key-value pairs for structured data.

Let's go into a bit more detail for each of these options. First, let's look at Azure Blob storage. Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimised for storing massive amounts of unstructured data, such as text files, or binary data such as images. Blob storage is ideal for serving images or documents directly to a browser,
because all of those documents are accessible via a URL. You can store files for distributed access, it's great for streaming video and audio files, and it's also used as the backing store for the various Azure backup, disaster recovery, and archiving services. Blob storage is also a great solution for storing data for further analysis by on-premises or Azure-hosted services. Objects in blob storage can be accessed from anywhere in the world over HTTP or HTTPS, so users or client applications can access those blobs via URLs, the Azure Storage REST API, PowerShell, the Azure CLI, or the Azure Storage client libraries. The client libraries themselves are available in multiple languages, including .NET, Java, Node.js, and Python.

Next, we have Azure Files. Azure Files enables you to set up highly available network file shares that can be accessed using the SMB (Server Message Block) protocol, the protocol that's generally used if you set up your own file server on a Windows server. This means that multiple VMs can share the same files with both read and write access. You can also access the files using the REST interface or the storage client libraries, which makes integrating external applications with the service even easier. One thing that distinguishes Azure Files from files on corporate file shares is that you can access those files from anywhere in the world using a URL that points to the file, and you can protect those files using either RBAC (role-based access control) or something called shared access signatures. You can generate shared access signatures, or SAS tokens, and they allow specific access to private assets for specified amounts of time or even from specific IPs. File shares can be used for many common scenarios, such as a replacement for on-site file servers, and they're also great for storing configuration files or even diagnostic logs and metrics.

Next, we have queue storage. The Azure Queue service is used to store and retrieve messages. Queued messages can be up to 64 KB in size, and a queue can contain millions of messages. Queues are generally used to store lists of messages that are then processed asynchronously. For example, say you've got a customer who wants to be able to upload photos to a service, and from those photos you want to automatically generate thumbnails. You could make your customers wait while you create the thumbnails as they upload photos, or you could have them upload the photos and send a message for each photo to a queue. Once the customer has finished the upload, that queue is monitored by another application, such as an Azure Function. The Azure Function can then retrieve the message from the queue, and from that message, which references the uploaded image, it creates the thumbnail. Each part of this processing can be scaled separately, giving you more control, and because the customer isn't waiting for it to finish, it's asynchronous.

Finally, we have Azure Table storage. As we said, Azure Table storage is a simple NoSQL implementation. It is, however, now part of the wider Azure Cosmos DB family. We'll go into more detail on Cosmos DB and Table storage later on, but as a brief overview, table storage simply gives you basic tables. Within those tables you can have entities, and each of those entities will consist of one or more columns. So think of them as very, very simple Excel spreadsheets, or even simpler, standard SQL tables.
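As a small illustration of the queue pattern just described, here is a hedged sketch using the Az.Storage PowerShell module. The storage account, resource group, and queue names are placeholders, and the CloudQueue/CloudQueueMessage types shown assume an Az.Storage version that exposes them.

```powershell
# Minimal sketch of the thumbnail-queue scenario above. Names are examples.
$key = (Get-AzStorageAccountKey -ResourceGroupName 'rg-storage' -Name 'mystorageacct')[0].Value
$ctx = New-AzStorageContext -StorageAccountName 'mystorageacct' -StorageAccountKey $key

# Producer side: create the queue and drop a message on it for each upload.
$queue = New-AzStorageQueue -Name 'thumbnail-requests' -Context $ctx
$message = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new('photos/holiday-001.jpg')
$queue.CloudQueue.AddMessageAsync($message) | Out-Null

# Consumer side (e.g. an Azure Function or a script): read and delete the message.
$received = $queue.CloudQueue.GetMessageAsync().Result
Write-Output "Generate thumbnail for: $($received.AsString)"
$queue.CloudQueue.DeleteMessageAsync($received) | Out-Null
```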
When creating storage accounts, it's important to understand that there are two different performance tiers: standard and premium. Standard storage accounts are backed by magnetic drives (HDDs). These give the lowest cost per gigabyte and are best for applications that require bulk storage or where data is accessed infrequently. Premium storage accounts are backed by solid-state drives (SSDs), and therefore they offer consistent, low-latency performance. However, it's important to understand that premium can only be used for Azure virtual machine disks and is best for I/O-intensive applications. Additionally, virtual machines that use premium storage for all disks qualify for a 99.9% SLA, even when running outside of availability sets. It's important to create your storage account at the correct tier from the outset, as it's not possible to convert standard accounts to premium or vice versa.

As well as the performance tiers, there are also account kinds. When you create a storage account, you can choose general purpose, of which there are two versions, v1 and v2. Generally you would use v2 unless you've got a legacy application that specifically needs v1, and these are ideal for general use across all the services: tables, queues, files, and blobs. Generally speaking, these are what you would use for most day-to-day operations. However, you can also create a Blob storage account, which is a specialised account for storing unstructured data. Blobs are available in general-purpose accounts, but Blob storage accounts are more specialised and provide a few extra options. In particular, they give you the ability to use different access tiers: Hot, Cool, and Archive. Essentially, the Hot access tier indicates that the objects in your storage account will be accessed more frequently; the storage cost for Hot is the highest of the three tiers, but there are no transfer or access costs. At the other end of the scale, we've got the Archive tier. The Archive tier offers the lowest cost per gigabyte of storage; however, you do get charged for accessing the data. For this reason, the Archive tier is best for documents that are no longer needed day to day and just need to be stored for historical purposes. Then in the middle we have the Cool tier, which basically sits between the two: slightly cheaper to store than Hot, but with transfer costs, although those costs aren't quite as high as the Archive tier's.
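A quick sketch of how those choices map to PowerShell: the snippet below creates a standard, general-purpose v2 account with locally redundant storage and the Hot access tier, then lists its service endpoints. The resource group, account name, and region are made-up examples.

```powershell
# Illustrative only: names and region are examples.
New-AzResourceGroup -Name 'rg-storage' -Location 'uksouth' -Force

# Account names must be globally unique and all lowercase.
$account = New-AzStorageAccount `
    -ResourceGroupName 'rg-storage' `
    -Name 'examstoragedemo001' `
    -Location 'uksouth' `
    -SkuName 'Standard_LRS' `
    -Kind 'StorageV2' `
    -AccessTier 'Hot'

# The per-service URLs follow the pattern <account>.<service>.core.windows.net
$account.PrimaryEndpoints
```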
As mentioned, storage accounts are often accessed using URLs, and it's important to understand how these URLs are defined. Each of the four services has a different URL, but they all work along the same principles. By default, HTTPS is the only allowed option; this can be switched off, but it is not recommended. Then you have your account name: when you're naming your storage account, it's got to be unique across the whole of Azure. The service you want then determines the next part of the URL (blob, table, queue, or file), followed by core.windows.net. Finally, after that base URL, you have a different structure depending on whether you're using files, blobs, queues, or tables. Blobs and files work in pretty much the same way: you either have a container or share name, and then within that container or share you can have additional folders or the files themselves. In the case of tables and queues, you simply have the table or queue name after the base URL.

As you saw with the URLs, the URL itself will generally be something like accountname.blob.core.windows.net. This might not be something you always want, so you might want your own custom domain name. There are two ways we can achieve this: the first is direct CNAME mapping, and the second is intermediary mapping. With direct CNAME mapping, you enable a custom domain for, for example, your blob storage account, and you provide a CNAME record that maps to that endpoint. So for example, if your company's domain is contoso.com, you might want to map blobs.contoso.com, and so you would just set up a CNAME in your DNS and set the target as the full Azure URL. Intermediary mapping uses a verification subdomain called 'asverify'. Mapping a domain that is already in use in Azure may result in minor downtime as the domain is updated. If your domain currently supports an application with an SLA, you can avoid that downtime by using the second option, the asverify subdomain, to validate the domain. By prepending 'asverify' to your own subdomain, you permit Azure to recognise your custom domain name without modifying the DNS record for the domain itself. After you then modify the DNS record for the domain, it will be mapped to the blob endpoint with no downtime. For the exam, it's just important to understand that in order to use your own custom domain names, you will create a CNAME record that points to the underlying endpoint. With direct CNAME mapping, you just create the CNAME record, but this can cause downtime while the mapping updates. To avoid that, you would use the intermediary mapping option by first creating an asverify record, and then, once that's validated, creating your real CNAME record.

Azure provides a number of options for ensuring that your storage accounts are safe. The first is access keys. To access storage, you generally use access keys, shared access signatures, or RBAC integration. Access keys are static keys, that is, they don't change, although you can regenerate them manually if you wish, and they are basically alphanumeric character strings. They are often used for programmatic access. The second option is shared access signatures, which are time-limited and can be restricted to specific IPs. On top of that, you can set firewall rules, and you can limit access to storage accounts from specified virtual networks. RBAC integration means that you can limit who can access storage accounts to specific users in your Active Directory. All storage accounts are always encrypted at rest; by default, the keys used to encrypt the data are stored within Azure and managed by the platform, but it is possible for you to supply your own keys. Finally, it is possible to enforce and require HTTPS access to storage accounts.
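Tying the access-key and shared-access-signature options together, here is a hedged sketch that fetches an account key and then issues a short-lived, HTTPS-only, IP-restricted account SAS token. The account and resource group names are the example ones used in these notes, and the IP address is a placeholder.

```powershell
# Retrieve an access key and build a storage context with it.
$key = (Get-AzStorageAccountKey -ResourceGroupName 'rg-storage' -Name 'examstoragedemo001')[0].Value
$ctx = New-AzStorageContext -StorageAccountName 'examstoragedemo001' -StorageAccountKey $key

# Issue an account SAS: read/list only, blob and file services, 4-hour expiry,
# HTTPS only, restricted to a single (example) client IP.
New-AzStorageAccountSASToken `
    -Context $ctx `
    -Service Blob,File `
    -ResourceType Service,Container,Object `
    -Permission 'rl' `
    -Protocol HttpsOnly `
    -IPAddressOrRange '203.0.113.10' `
    -ExpiryTime (Get-Date).AddHours(4)
```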
When you create a storage account, you have a number of options for defining the redundancy. The first option is locally redundant storage (LRS). This replicates your data three times within a single data centre, and it provides eleven nines of durability for objects over a given year. Locally redundant storage is the lowest-cost option and offers the least durability compared to the other options: if a data-centre-level disaster (for example, fire or flooding) occurs, all replicas within a storage account using LRS may be lost or unrecoverable.

Next we have zone-redundant storage, or ZRS. This replicates your data synchronously across three storage clusters within a single region. Each storage cluster is physically separate from the others and is located in its own Availability Zone. When you store data in a storage account using ZRS replication, you can continue to access and manage your data if an Availability Zone becomes unavailable. ZRS provides excellent performance and low latency.

Next up, we have geo-redundant storage (GRS). Geo-redundant storage is designed to provide at least sixteen nines of durability for your objects. If your account has GRS enabled, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region isn't recoverable. If you opt for GRS, you have two related options to choose from. Standard GRS replicates your data to another data centre in the secondary region, but that data is available to be read only if Microsoft initiates a failover from the primary to the secondary region. Alternatively, you can have read-access geo-redundant storage (RA-GRS). This is based on GRS, but RA-GRS replicates your data to the secondary region and also provides you with the option to read from the secondary region. With RA-GRS, you can read from the secondary region regardless of whether Microsoft initiates a failover or not. And then finally, we have a newer option, geo-zone-redundant storage (GZRS), which combines the high availability of zone-redundant storage with the protection of geo-redundant storage. Data in a GZRS storage account is replicated across three Azure Availability Zones in the primary region and is also replicated to a secondary geographic region for protection. Each region is paired with another region within the same geography, making a regional pair.

So, we've talked a lot about regions and geographies, but what exactly do these mean? A region is a set of data centres deployed within a latency-defined perimeter and connected through a dedicated low-latency network. Azure has more global regions than any other cloud provider, which gives customers the flexibility to deploy apps wherever they need to. Azure is generally available in 52 regions around the world, with plans to expand further. So think of an Azure region as a set of data centres within a specific area or country. A geography, by comparison, is a discrete market, typically containing two or more regions, that preserves data residency and compliance boundaries. So for example, in Europe, you would have North Europe and West Europe as regions, and Europe as the geography. Within each region, you then have availability zones. Availability zones are physically separate locations within that region. Each availability zone is made up of one or more data centres equipped with independent power, cooling, and networking. It's important to understand that not all regions support availability zones; for example, at this moment in time, the UK West region does not.

Bringing all this information together, we can easily map out the various options that you have. These slides will be available for you to download and use to help you with your revision. In the next lecture, let's go ahead and actually create a storage account and run through the various options you have when configuring it.
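A quick sketch of how the replication setting maps to PowerShell, using the example account from earlier. Note that not every redundancy change can be made with a simple SKU update (moves involving zone-redundant SKUs can require a migration), so treat this as illustrative.

```powershell
# Check the current replication setting for the example account.
(Get-AzStorageAccount -ResourceGroupName 'rg-storage' -Name 'examstoragedemo001').Sku.Name

# Change from LRS to geo-redundant storage with read access on the secondary.
Set-AzStorageAccount -ResourceGroupName 'rg-storage' -Name 'examstoragedemo001' -SkuName 'Standard_RAGRS'
```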

11. Creating Storage Walkthrough

Now that we've gone through the theory, let's go ahead and create a storage account and then have a look through it. If you followed one of the earlier lectures where we enabled the cloud-based console, you'll already have a storage account that was created as part of that. We're going to ignore that for now, because that's all defaults that we don't really want to look at at the moment. So let's go ahead and create a new resource: click the 'Create a resource' button and search for 'storage'. We want 'Storage account'; make sure you are choosing the one from Microsoft, then simply go ahead and click Create.

As usual, we need to choose a resource group, and then we need to give the account a name. As I said earlier, this name has to be unique across the whole Azure platform. Choose a location, and then choose the performance type: because this is going to be general purpose, and we are not going to be using it to store virtual machine disks, we'll stick with Standard. For the account kind, I'm going to go for general purpose v2. Finally, for the replication, I'm going to start with locally redundant storage. If you remember from the previous lecture, that means it's just going to replicate itself within the same data centre and we won't get regional or zone-redundant storage, but we're going to go and change that later. Finally, we're going to leave the access tier at Hot, which means it's slightly more expensive to store the data than, for example, Cool; however, there's no access cost for the actual data.

We're going to leave it as a public endpoint for now; we'll go and change that later. The next options are where we can start to change the security settings. More often than not, it's best to leave these at the defaults. So for example, by default secure transfer is enabled, which requires all access to be over HTTPS. You could disable that, but it's not recommended. If you're going to have large file shares, for example over five terabytes, then you need to enable the large file shares option. We also have an option called blob soft delete, which enables you to recover blob data when it's been accidentally deleted; think of it like a recycle bin. By default this is disabled, but you can go ahead and enable it. Obviously there will be cost implications, because when you delete things they might disappear from view, but they are actually still using storage until they're permanently removed. Finally, we can choose to enable Data Lake Storage Gen2. We'll be covering Data Lake more later on; for now, leave that disabled and let's move on. Next, as with other resources, we can add tags; we're not going to worry about that just now, and we're just going to go ahead and say 'Review + create', and then Create.
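For reference, the two security-related options just mentioned (secure transfer and blob soft delete) can also be set from PowerShell. A hedged sketch using the example names from these notes; Enable-AzStorageDeleteRetentionPolicy is available in recent versions of the Az.Storage module.

```powershell
# Require HTTPS for all access to the account (secure transfer required).
Set-AzStorageAccount -ResourceGroupName 'rg-storage' -Name 'examstoragedemo001' -EnableHttpsTrafficOnly $true

# Turn on blob soft delete (the "recycle bin") with a 7-day retention window.
$ctx = (Get-AzStorageAccount -ResourceGroupName 'rg-storage' -Name 'examstoragedemo001').Context
Enable-AzStorageDeleteRetentionPolicy -RetentionDays 7 -Context $ctx
```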
So now that it's been created, we can go to our storage account and take a look through some of the options. The first thing we'll look at is down the left-hand side, where we have the blob, file, table, and queue services. Before we can actually start using the relevant service, we need to create the containers, shares, tables, and so on, so let's go through each of them in turn.

For the blob service, the first thing we need to do is create a container. To do that, we simply go to Containers and click to create a container. When you create a container, you have the option of choosing what kind of public access level you want to give it. By default, it's private, with no anonymous access. You can, however, set blob-level anonymous read access, or container-level access, which grants anonymous read access to both the container and its blobs. This basically means that if you want all data to be secure, so that you must authenticate to access it, you would keep it private. If, for example, you wanted to provide a PDF file to share with your users on the internet, then you could set it to blob anonymous read access, which means you could just send out a URL pointing directly to the PDF document you've uploaded. Alternatively, you could share an entire container with the public, so that you could supply lots of documents and allow users to browse them. The container name is case-sensitive; in fact, you've got to make sure it's all in lowercase. So go ahead and click OK, and then once the container is there, you can go in and upload documents by clicking Upload. At the moment, the authentication method is, by default, set to an access key. You can actually switch on Azure AD authentication, but we'll look at another way to access the data later. For now, let's just go back to the storage account overview; if we go back to Containers, you'll see our container there.

Next, let's look at file shares. We have our file share service, but there's nothing in there, so the first thing we need to do is create a file share, just like with containers. Go ahead and click to create a file share, give it a name, and, if you want, set a quota for the maximum number of gigabytes you want to be able to store in that share before it stops you from storing any more; if you don't want a quota, simply click Create. Once we go into the share, we have similar options where we can upload files or even add directories. We can also edit the quota, delete the share, or use this Connect button. If we click the Connect button, it gives us examples of how to connect to the file share from Windows, Linux, or macOS.

Now let's look at tables. As before, to start using tables we have to click to add a table, give it a name, and click OK, and then once the table is there, it is ready to use. Do note that here we get to see the URL that you can use to access it over the internet using an access key. The final one we'll look at is queues, and just like everything else, before you can use a queue you have to go ahead and create one. Again, this is similar to tables: you give the queue a name, and you would then access it programmatically.
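The same objects can be created from PowerShell instead of the portal. A short sketch, with made-up names, using the data-plane cmdlets in Az.Storage:

```powershell
# Build a context for the example account, then create one of each object type.
$ctx = (Get-AzStorageAccount -ResourceGroupName 'rg-storage' -Name 'examstoragedemo001').Context

New-AzStorageContainer -Name 'documents' -Permission Off -Context $ctx   # Off = private
New-AzStorageShare     -Name 'uploads'   -Context $ctx
New-AzStorageTable     -Name 'products'  -Context $ctx
New-AzStorageQueue     -Name 'orders'    -Context $ctx

# Set a 5 GiB quota on the file share, like the quota box in the portal.
Set-AzStorageShareQuota -ShareName 'uploads' -Quota 5 -Context $ctx
```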

12. Using Storage

Now that we have our storage account set up, let's have a look at how we can use it by uploading and downloading files, and at some of the other options that we looked at earlier. First of all, we'll look at containers. We created ourselves a container at the very start, and in that container we created a folder called 'upload'. If we go into that folder, at the moment it's completely blank, so the first thing we're going to do is upload a file. As this is a blob container, I'm just going to upload an image. Quite simply, we can go to this Upload link here, go and look for a file to upload, and simply upload it. Once that's uploaded, it will appear in our container. If we click on it, we can see lots of information about the file: when it was uploaded, its size, and so on. You'll also note this URL. If we copy that URL, go to a new browser window, and paste it in, we'll get an error that says 'Resource not found'. The reason we're getting an error is that when we first created this container, we created it as private.

So the first thing we can do is change the access level. If we change it to blob, it gives us read access directly through the URL without needing to be authenticated. Container-level access would also give the ability to list the contents of the container, whereas blob-level access just gives access to individual blobs. The idea is that with blob-level access you need to hand out the URL for each individual file that you want people to access, whereas with container access they can enumerate the contents themselves. For now, we're just going to go for blob and say OK. Now, if we go back to our URL and hit refresh, we can see the file by going directly to the URL.

Instead of changing the access level, another way we can lock down access to files is via the firewall. If we go down to the 'Firewalls and virtual networks' pane, by default we can see access is allowed from all networks, including the Internet, which is why we can access it directly through the URL. Sometimes, however, you want to lock that down, and you may want to allow access only from virtual networks within your subscription, or from specific IPs, if you want to allow access to, for example, a service provider or a particular network. So, for example, we could add an existing virtual network: you go in and select the virtual networks that you want and the subnets, and then click Enable. This would then give any virtual machine or service running within those VNets access to that blob storage account. Alternatively, you can add individual IP addresses, and that's what we'll do now.

Before we do that, let's just click Save. So we've switched to selected networks but locked it down without adding anything. If we go back to our URL and hit refresh, we now get an authorisation failure. Go back in, add our client IP address, and hit Save, and now our IP is listed. If we go back and hit refresh (it may take a few tries while the firewall rules update), we can access the file again.
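The same upload, access-level change, and firewall lockdown can be done from PowerShell. An illustrative sketch with example names and a placeholder IP address:

```powershell
# Upload a blob, then open up anonymous blob-level read access on the container.
$ctx = (Get-AzStorageAccount -ResourceGroupName 'rg-storage' -Name 'examstoragedemo001').Context

Set-AzStorageBlobContent -Container 'documents' -File './photo.jpg' -Blob 'upload/photo.jpg' -Context $ctx

# Equivalent of changing the container access level to "Blob" in the portal.
Set-AzStorageContainerAcl -Name 'documents' -Permission Blob -Context $ctx

# Lock the account down to selected networks, then allow a single client IP.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName 'rg-storage' -Name 'examstoragedemo001' -DefaultAction Deny
Add-AzStorageAccountNetworkRule -ResourceGroupName 'rg-storage' -Name 'examstoragedemo001' -IPAddressOrRange '203.0.113.10'
```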
When we were adding existing virtual networks, you'll have noticed that one of the options being enabled was the service endpoint status. When you add a virtual network and click Enable, what it actually does is create a service endpoint against the storage account. That means that any services on our virtual networks trying to access the blob storage will go over the private network, i.e. directly within Azure itself rather than going out to the outside world. When you're in hybrid scenarios and you want to secure access to services, this is a great way to ensure that traffic always stays local.

Before we move off blob storage, I'm also going to enable a custom domain. As we said earlier, your URL will be your account name followed by .blob.core.windows.net, but you might want to use your own URL. What you can do is go ahead and add a custom domain. The first thing you need to do, at your DNS provider, is create a CNAME record that points to your full Azure URL, which is what I've already done. It will be slightly different with every DNS provider, so you'll have to check their specific instructions. However, once you've got that set up, you can go ahead and enter your custom URL here and then click Save, and now you'll be able to access the files using your custom URL. However, because it's going over HTTPS and we've not got any certificates uploaded, it will come up warning you that the site is not secure; we can override that and get to our URL.

Now let's look at the file service. As before, we created an upload folder, and again, through the web browser, we can upload files, add extra directories, and so on, as we did before. The other way we can connect is directly through a drive mapping. If we click this Connect button here, it brings up example scripts of what we need to do in order to create that drive mapping. So if we copy that to our clipboard, we then need to go into PowerShell and simply paste in that command, and that creates the drive mapping. We can now see that on my PC I've got a drive mapped to the uploads share. What I can do now is simply drag files into it, go back to our browser, and hit refresh, and now we can see our file has been uploaded. Again, if we click on that, we get a URL; however, this URL is more for programmatic access, because in order to use it, certain tokens and authentication need to be in place, and if you try to access it directly you will get an error. To access it that way, you would need to call the URL programmatically, providing the correct header information. Alternatively, through the web browser, we can hit the Download button, and that allows us to download the file directly through the browser.
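The Connect blade generates a script along the lines of the hedged sketch below. The account name, share name, and key handling here are placeholders, so prefer the script the portal produces for your own account.

```powershell
# Hedged sketch of the kind of drive-mapping script the Connect blade generates.
$accountName = 'examstoragedemo001'
$shareName   = 'uploads'
$key         = (Get-AzStorageAccountKey -ResourceGroupName 'rg-storage' -Name $accountName)[0].Value

# Store the credentials so Windows can authenticate to the share over SMB (port 445).
cmdkey /add:"$accountName.file.core.windows.net" /user:"AZURE\$accountName" /pass:$key

# Map the share as drive Z:.
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\$accountName.file.core.windows.net\$shareName" -Persist
```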
Finally, we're going to use something called Storage Explorer. Using the web browser is fine; however, it's not always the easiest and most direct route. If you do a search for Azure Storage Explorer, you'll find the official Microsoft page (make sure it's on azure.microsoft.com), and this is a tool that you can download for most operating systems, including Windows, macOS, and Linux. Go ahead and download it; then, once it's downloaded, run it, accept all the defaults, and launch Storage Explorer. On the first run, the first thing it asks you to do is add an Azure account. Go ahead and click Next, and it will ask you to sign in to your account. Make sure you're using the same account that you used to log into the portal, click Apply, and now you'll be able to go into your account and see your various storage accounts and even the disks that you've got set up.

If we go into the storage account that we've just created, we will see our blob containers again with all our files, and again we can download and upload files through this. The same goes for file shares: we can do all sorts of management, upload and download files, create new folders, and so on. However, we can also access our tables and queues. In the tables, we created a single table called Products. Because you can't normally view this data through the web browser, Storage Explorer is a handy way to see the information that's in there, and we can actually use it to create records. With our Products table selected, we can click Add. When creating entities within table storage, we at least need a partition key and a row key. A partition key allows you to group records; so, for example, if you had a list of products, a partition key might be a product category. Partition keys don't have to be unique, and multiple records within the same partition will obviously have the same value. Next, we need a row key, and row keys have to be unique for each individual record within a partition. We can also go ahead and add additional properties, define what type of value each will be, and click Insert, and that record now appears in our Products table.

Because this is NoSQL, one of the greatest benefits is that you're not restricted to a fixed set of columns. If we were to add another record, we could keep the partition key the same but use a new row key and a different product name, and now we could also add another property, say a price, which we'll set to an integer, and click Insert, and it will quite happily insert another record with a different structure. In this way, we can construct very dynamic records without having to predefine the columns ahead of time.
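The same entities can be inserted from PowerShell using the separate AzTable module (Install-Module AzTable), which layers entity-level cmdlets on top of Az.Storage. A hedged sketch with example names and values:

```powershell
# Assumes the AzTable module is installed alongside Az.Storage.
$ctx   = (Get-AzStorageAccount -ResourceGroupName 'rg-storage' -Name 'examstoragedemo001').Context
$table = (Get-AzStorageTable -Name 'products' -Context $ctx).CloudTable

# Two entities sharing a partition (the category) but with different properties,
# illustrating the schemaless nature of table storage.
Add-AzTableRow -Table $table -PartitionKey 'fruit' -RowKey 'p001' -Property @{ ProductName = 'Apple' }
Add-AzTableRow -Table $table -PartitionKey 'fruit' -RowKey 'p002' -Property @{ ProductName = 'Banana'; Price = 2 }

# Read everything in the 'fruit' partition back.
Get-AzTableRow -Table $table -PartitionKey 'fruit'
```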

Go to the testing centre with ease of mind when you use Microsoft Azure Architect AZ-303 VCE exam dumps, practice test questions and answers. Microsoft AZ-303 Microsoft Azure Architect Technologies certification practice test questions and answers, study guide, exam dumps and video training course in VCE format help you study with ease. Prepare with confidence and study using Microsoft Azure Architect AZ-303 exam dumps & practice test questions and answers VCE from ExamCollection.

Read More


Comments
* The most recent comments are at the top
  • Gaurav
  • Canada
  • Jan 07, 2022

I just passed my exam with a premium dump on date January/6/2022

  • Jan 07, 2022
  • Jun67!
  • United Arab Emirates
  • Jan 05, 2022

Is the premium dump still valid?

  • Jan 05, 2022
  • unknown
  • United States
  • Nov 09, 2021

Passed with 753. Maybe 5 new questions and about 2 variations of ones on here.

  • Nov 09, 2021
  • Hussein
  • Saudi Arabia
  • Jun 19, 2021

Pass last week with 735 premium exam study and some new 5 Questions

  • Jun 19, 2021
  • basssman
  • Netherlands
  • Jun 10, 2021

Passed today with a score of 732 with only studying premium dump file.

So study well and not only the premium dump file but also understand what you are studying!

Good luck!!!

  • Jun 10, 2021
  • J
  • United Kingdom
  • May 27, 2021

Passed today with 753 using Premium. Some new questions, so make sure you cover all bases.

  • May 27, 2021
  • Dante Pourrrrrs
  • Apr 23, 2021

Premium Dump 155Q, 100% valid, passed today (23Apr21) with 881.

  • Apr 23, 2021
  • Roger
  • Apr 07, 2021

Premium Dump 155Q, 100% valid, passed today (07Apr21) with 849.
some new questions 3 or 4
Tks Guys.

  • Apr 07, 2021
  • beto
  • Peru
  • Mar 24, 2021

Passed today! 100% of the questions in the exam, just verify answers with ms docs.

  • Mar 24, 2021
  • Gem
  • United Kingdom
  • Mar 02, 2021

The Premium dump is 100%, some new questions. passed with 835.

  • Mar 02, 2021
  • Michel
  • Switzerland
  • Feb 16, 2021

I used the paid version too, passed and most questions are a match.

  • Feb 16, 2021
  • Kevin
  • Germany
  • Jan 25, 2021

I used the paid version, passed and most questions are a match.

  • Jan 25, 2021
  • Dennis
  • Netherlands
  • Jan 22, 2021

Just passed the exam with the premium dump file. No labs

  • Jan 22, 2021
  • Martin
  • Netherlands
  • Jan 20, 2021

I used the Premium files and lot of questions are in. I passed with 800.
A lot of questions : docker, backup managed id.

  • Jan 20, 2021
  • Jorge
  • United Arab Emirates
  • Jan 18, 2021

My friend told me that this is the best site with exam dumps, so I decided to try it out. I checked the other platforms, and this one seems to be more legit than others. That is why I decided to go for the premium package. I hope it will work for me. Wish me good luck.

  • Jan 18, 2021
  • onlineguy
  • Canada
  • Jan 15, 2021

passed using only premium Q's, first 133 in dump. did not study any az300 legacy q's.

  • Jan 15, 2021
  • L
  • Spain
  • Jan 12, 2021

Passed on 29 December based on the premium file, two new questions.

  • Jan 12, 2021
  • Alferd
  • Poland
  • Jan 10, 2021

Passed Today. Premium Valid. 2 New questions. 823.

  • Jan 10, 2021
  • Sabouri Sabouri
  • United States
  • Jan 08, 2021

I passed the AZ-303 test today with 820/1000. There were many questions about elastic pool, managed identities, and Key Vault. I studied by working through all the modules associated with the test, by exploring the Microsoft website and these braindumps.

  • Jan 08, 2021
  • Artem
  • Poland
  • Jan 06, 2021

The dump is valid, passed with 805 today. Prepared with both 303 and 300, but I didn't recognize any questions from 300, but there were some new ones.

  • Jan 06, 2021
  • Mohammad
  • Saudi Arabia
  • Jan 05, 2021

premium dump valid, I passed the exam today

  • Jan 05, 2021
  • Artem
  • Canada
  • Dec 28, 2020

Indeed, this dump is valid, because I passed the AZ-303 exam with 805 today. To be honest, there were some new questions during the test, but it is all because of the various deliveries. I can understand that it is quite random, but anyway all of the other questions were the same. So, they helped me a lot. Thanks!

  • Dec 28, 2020
  • TCR
  • Australia
  • Dec 28, 2020

Just passed in Australia. I used the premium file. A few new questions, make sure you study as well. Good luck!

  • Dec 28, 2020
  • DamnFromBritLand
  • United Kingdom
  • Dec 18, 2020

Just passed. Must consult Premium as well as Legacy Az-300. Practice every question and understand limitations based on RG, regions etc. Almost lost my brain. LOL.

  • Dec 18, 2020
  • juni5juni
  • South Korea
  • Nov 19, 2020

I pass the exam and valid dump.
Some question is new.

  • Nov 19, 2020

Add Comment

Feel Free to Post Your Comments About ExamCollection VCE Files which Include Microsoft Azure Architect AZ-303 Exam Dumps, Practice Test Questions & Answers.
