
Pass Your Google Professional Data Engineer Exam Easy!

100% Real Google Professional Data Engineer Exam Questions & Answers, Accurate & Verified By IT Experts

Instant Download, Free Fast Updates, 99.6% Pass Rate

Professional Data Engineer Premium Bundle

$79.99

Google Professional Data Engineer Premium Bundle

Professional Data Engineer Premium File: 205 Questions & Answers

Last Update: Jan 13, 2023

Professional Data Engineer Training Course: 201 Video Lectures

Professional Data Engineer PDF Study Guide: 543 Pages

Professional Data Engineer Bundle gives you unlimited access to "Professional Data Engineer" files. However, this does not replace the need for a .vce exam simulator. To download VCE exam simulator click here


Compute

8. Google Container Engine - Kubernetes (GKE)

Here is a question that I'd like us to consider: is a VM, that is, a virtual machine, more lightweight than a Docker container? Is this statement true or false? Let's try and keep the answer to this question at the back of our minds as we discuss containers, and then we'll get back to the answer at the end of the video. Google Compute Engine represents the infrastructure-as-a-service option. Let's move on now to the use of containers. Let's say that your website or web application has a lot of dependencies. Maybe, for instance, you run a fairly typical architecture in which you have a database server sitting behind a layer of web logic. This requires you to run separate web server and database instances, and you also want separate containers to isolate each from the other. This is starting to approach a service-oriented architecture, or, taken to its logical conclusion, you might even have a large number of instances communicating via REST APIs using something known as microservices. Now, you could always go down the infrastructure-as-a-service route in this instance as well. You could just get a large number of Compute Engine VMs, run Kubernetes on a bunch of them, and effectively create your own container environment. But if you'd like to save yourself the trouble of managing your own container cluster, just make use of the Google Container Engine. Let's make sure we really understand what's going on here, because containers aren't really something that everyone has worked with. Let's start by defining a container. This definition comes from the website of Docker, which of course is a market leader in container technology. A container image is a lightweight, standalone executable package that also encapsulates everything that is needed to run it. This would include code, the runtime system, tools, system libraries, and settings. This is a pretty comprehensive definition. Let's parse this and, in particular, contrast this with VMs.
This cute little image from the Docker website really does a good job of explaining it. We could have a bunch of containers running on top of the same OS kernel. Each one of those containers effectively has its own little environment, executable files, as well as the entire runtime setup. Each one of those containers can be created by using software such as Docker. Docker can be thought of as a build tool which takes in your code and outputs a container which can be ferried around and run in its own little sandbox. Containers differ from virtual machines in some important ways, which we'll get to in a moment, but the basic idea is fairly similar. Individual containers are often defined in the Dockerfile format. These containers tend to run in a container cluster, which is managed using software known as Kubernetes. Let's now understand the differences between containers and virtual machines, because this is very relevant to your choice of Google Compute Engine or Google Container Engine. Here is a great schematic representation which compares containers with virtual machines. Once again, these are from the Docker website, which of course approaches containers from a rather Docker-specific standpoint, which is perfectly okay in the case of containers. Right below our individual containers lies Docker. Now, Docker is an open platform that allows folks to build and run distributed applications in the form of containers, as we already realized. The crucial bit, though, is that Docker runs on top of the host OS. This means that each individual container does not include its own operating system. In contrast, in the case of VMs, we can see that each virtual machine has its applications and libraries, and it also has a guest OS. And beneath each of the virtual machines lies the Virtual Machine Monitor, or Hypervisor, as it's known.
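To make the build step concrete, here is a minimal, hypothetical Dockerfile of the kind Docker would consume to package code and dependencies into an image; the base image and file names are illustrative only:

```dockerfile
# Illustrative Dockerfile: package an app plus its dependencies into an image.
FROM python:2.7-slim                   # base image providing the runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]               # what runs when the container starts
```

Running `docker build` against a file like this produces a self-contained image that can be shipped to any host with a Docker runtime.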
This is a piece of software that would be created by a company like VMware, for instance, and it makes sure that one or more virtual machines are able to run on the host machine and interact with the hardware and other infrastructure. Compute Engine instances are virtual machines like those on the right. Container Engine instances are container instances like those on the left. This is an important distinction. If you chose to use the Google Cloud Platform for containers, you would be running a container cluster. Schematically, a container cluster would look like this. You would have one supervising machine: this is the block at the bottom, running Kubernetes, that's known as the master endpoint. Think of this as the Hadoop cluster manager. This would be in touch with a number of individual container machines, each running containers, and each talking or communicating with the master using a piece of software known as a kubelet. In the world of the Google Container Engine, that coordinating machine is known as the master node. It runs Kubernetes. Each of the other machines in the cluster is known as a node instance. Each node instance has its own kubelet that talks to the master, and atop each node instance run pods. And inside each pod are multiple containers. This is important. The master talks to node instances, which in turn contain pods, and those pods contain the containers. In effect, containers add one further level of indirection to your code. This makes them different from virtual machines, because containers virtualize the operating system. For instance, in the block diagram we saw previously, Docker was acting as a proxy between the container and the OS. In a virtual machine, on the other hand, every VM has its own operating system, which talks to the hypervisor. And that hypervisor, or virtual machine monitor, is in effect virtualizing or abstracting the hardware. Now it's pretty clear that a virtual machine needs to haul around its operating system within it.
This will make it less portable, as well as larger in size and slower to boot. In general, virtual machines are just more heavyweight than containers because, of course, the virtual machine OS is going to take up a significant amount of space and resources. We'll have a lot more to say on the use cases of containers and VMs in just a moment, but do understand these advantages of containers. Containers are a much newer technology than virtual machines. If you did decide to host your website or web app using Google Container Engine, your need for a large internal DevOps setup would be largely mitigated. You still could use something like Jenkins if you choose to, but to a large extent, you can get all that you need accomplished using Kubernetes. Jenkins can be set up to work with the Container Engine quite seamlessly. You could use Jenkins for CI and CD and just make use of the Container Engine functionality for other stuff like packaging and deployment. As with both the Container Engine and the App Engine, which we'll get to, you make use of Stackdriver, which is a part of the GCP suite, for logging, monitoring, and so on. Containers have some real advantages for this particular use case of web serving. The first of these is componentization. Here you might choose to have containers in your web app separating the different components. For instance, the web server and the database need to be isolated from each other. We don't want your application layer directly touching the database, bypassing the web server logic. All of this stuff leads to more and more services, which might be communicating with each other in the form of microservices. This is a service-oriented architecture. Containers have really developed and gained popularity for this specific use case. The other advantage, which we've discussed at some length, has to do with portability. Because the container is self-contained, it has everything needed for it to run.
Barring the OS, of course, the container is easily shipped around. You're basically shipping your application as well as its dependencies in one compact bundle. And the last advantage has to do with speed of deployment. In contrast to virtual machines, containers do not need to take along their operating system, and as a result, it's possible to build a system from a set of image definitions, which can also be encapsulated very nicely using a container registry. The Google Container Registry is a great example. There are additional advantages that come with working with containers on GCP, and the GCP docs are not shy about reminding us of them. The first of these has to do with orchestration. Remember that Google's Container Engine uses Kubernetes in the form of clusters, so that there is one master entity that is controlling all of the individual node instances. Kubernetes in that master instance significantly reduces the amount of orchestration required. Instead of administering individual containers or even the individual pods, or creating or shutting down each container manually, you can now do all of that through the Container Engine using various configuration files. The second has to do with the registry mechanism, which we just spoke about. It's possible to register container images and then, whether it is on GCP or anywhere else, pull images out of that registry. This is a great code reuse mechanism. And the last advantage is that of flexibility. As we shall see, App Engine, which is yet another compute option, can seamlessly interact with containers as well as Compute Engine VMs. In fact, one of the two most common and important flavours of App Engine instances is implemented using containers: that's the standard environment. So it's pretty clear that on GCP, when you make use of containers, you can also make use of individual VMs, or even of other PaaS options similar to Heroku or Engine Yard.
Let's return to the question we started this video with: the statement on screen is false. A virtual machine is definitely more heavyweight than a Docker container. We've discussed at length why this is the case. In a nutshell, it's because a VM contains within itself an abstraction for the OS; a Docker container does not. A Docker container only has the app and whatever other code or resources are required for that app to execute; it does not include the operating system. For this reason, virtual machines tend to be orders of magnitude bigger than containers, and containers are also easier to deploy and get started with.

9. More GKE

Here, as usual, is a question that I'd like you to ponder as you go over this material. There's a statement on screen now. The statement says scaling a container cluster is hard work: if you really want auto-scaling, you've got to use App Engine. Is this statement true or false? Recall that while working with Compute Engine instances, we had to choose from a menu of storage options, which included persistent disks, which could either be ordinary or SSD, local SSD disks, and Google Cloud Storage. Storage options on Container Engine instances are not all that different. There is, however, one important subtlety. This has to do with the type of attached disk. Recall that when you use a Compute Engine instance that comes along with an attached disk, the link between the Compute Engine instance and the attached disk will remain for as long as the instance exists. The same disk volume is going to be attached to the same instance. This will be the case even if you detach the disk and use it with a different instance. However, when you're using containers, the on-disk files are ephemeral. This is something important that you have to keep in mind. If a container restarts, for instance, after a crash, whatever data that you have in your disk files is going to be lost. There is a way around this ephemeral nature of storage options, and that is by using a persistent abstraction known as a GCE persistent disk. The name isn't important; the principle is important. If you are going to make use of Container Engine instances and you do not want your data to be ephemeral, that is, if you want it to remain associated with your containers, you've got to make use of this abstraction. Otherwise, your disk data will not persist after container restarts. Load balancing is yet another area where working with Container Engine instances is rather more complicated than working with Compute Engine VMs. With the Container Engine, you can make use of network-level load balancing.
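As a sketch of the persistent-disk abstraction just described, a pod can mount a pre-created GCE persistent disk so that its data survives container restarts; the disk name and pod name below are illustrative, not from the lecture:

```yaml
# Illustrative pod spec mounting a GCE persistent disk. The disk
# "my-data-disk" must already exist in the same zone as the node.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-data
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data-volume
      mountPath: /var/data      # files here persist across container restarts
  volumes:
  - name: data-volume
    gcePersistentDisk:
      pdName: my-data-disk      # the pre-created persistent disk
      fsType: ext4
```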
This works right out of the box. However, keep in mind that the higher up the OSI protocol stack you go, the more sophisticated your load balancing becomes. Extending that logic, the most sophisticated form of load balancing is going to be HTTP load balancing. This is something that does not work very simply with the Container Engine. If you want to use HTTP load balancing with container instances, you're going to have to do some interfacing of your own with the Compute Engine load balancing infrastructure. Now that we've understood some of the pros and cons of working with containers relative to virtual machine instances, let's turn our attention to the architecture and implementation of container clusters in GCP. As we've already discussed, these have a master node running Kubernetes. This controls a bunch of node instances, which run kubelets, and inside the pods on those node instances are individual containers. Kubernetes is an open-source container cluster management piece of software created and maintained by Google. You sometimes see folks talking about the pros and cons of GCE and GKE. By GCE, they mean the Google Compute Engine, and by GKE, they mean the Google Kubernetes Engine. In other words, GKE is shorthand for all of the container functionality available on GCP. Kubernetes is the orchestrator, which runs on the master node, but it really depends on the kubelets, which run on the node instances. In effect, a pod is a collection of containers, all of which share the same underlying resources. For instance, they all have the same IP address, and they can share disk volumes. There may be a web server pod, for instance, which would have one container for the server and then containers for the logging and the metrics infrastructure. Pods are defined using configuration files specified either in JSON or YAML. Each pod is, in effect, managed by a kubelet.
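For instance, a web-server pod of the kind just described might be defined in YAML roughly like this (the image choices and names are illustrative):

```yaml
# Illustrative pod definition: a web server plus a logging sidecar.
# Both containers share the pod's IP address and any declared volumes.
apiVersion: v1
kind: Pod
metadata:
  name: web-server-pod
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: log-agent
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/app.log"]
```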
That kubelet is in turn controlled by the Kubernetes master node. This schematic representation also hints at how container clusters are really architected in the GCP world. A container cluster is, in essence, a collection of Compute Engine VM instances, each of which runs Kubernetes. And in this way, we can see that each of the compute options available is actually based on the same underlying infrastructure, which of course comes as no surprise. When we talk about App Engine instances, we shall see how one of them is implemented using Compute Engine instances and the other is really implemented using containers. The VM instances that are contained in a container cluster are of two types, as we have already seen. There is one special instance, which is the master. This is the one that runs Kubernetes, and the others are node instances, which run kubelets and manage pods of containers. As you've already seen, each of the node instances has a rather similar look. It's managed by the master, and it's going to be running the services that are necessary to support the Docker containers, which contain the actual code that's being executed. Each node runs the Docker runtime and also hosts the kubelet agent. That kubelet agent is going to manage the Docker runtime and also make sure that all of the Docker containers that are scheduled on that host are running successfully. Let's also understand the role played by that master endpoint in a little more detail. It runs the Kubernetes API server. This is in charge of servicing REST requests for scheduling pod creation and deletion from wherever they come in. It also has to do with auto scaling, more on this in just a moment, and with synchronising information across the different pods. Within your container cluster, you might want to have different groups of instances that are similar to each other. Each of these groups is known as a node pool. So a node pool is a subset of machines within a cluster that all share the same configuration.
As you might imagine, the ability to have different node pools helps with customising instance profiles in a cluster, which in turn can come in handy if you frequently make changes to your containers. You should be aware that it's possible to run multiple Kubernetes node versions on each node pool in your cluster and have each of those node pools independently receive different updates and deployments, which is a pretty powerful way of customising the individual instances within clusters. GCP also has its own Container Builder. This is a piece of software that helps execute container image builds on GCP's infrastructure. Basically, this is very similar to the creation of a JAR file or any other kind of executable archive. It will import source code from a bunch of different repositories, which could be on cloud storage, by the way, and then go ahead and build it to your specifications to produce artefacts that could be Docker containers, Java archives, and so on. Another handy bit of functionality, which we alluded to previously, is the Container Registry. This is a private registry for your Docker images. The Container Registry is not all that different from a registry on a Windows machine, if you're familiar with those. Think of it as a way to access, that is, push, pull, or manage container images from any system, whether it's a Compute Engine instance or your own on-premise hardware, through secure HTTP endpoints. In a sense, this is a very controlled, regulated way of dealing with container images. You should also be aware that it is possible to hook up Docker and the Container Registry so that they talk to each other directly: using the Docker command-line credential helper tool, you can authenticate Docker directly with the Container Registry.
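A typical push/pull round trip against the Container Registry looks roughly like this; the project and image names are placeholders, and the commands assume an installed, authenticated Cloud SDK:

```sh
gcloud auth configure-docker                  # wire Docker up to registry credentials
docker tag my-image gcr.io/my-project/my-image:v1
docker push gcr.io/my-project/my-image:v1     # publish to the private registry
docker pull gcr.io/my-project/my-image:v1     # fetch from any authorised machine
```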
And because this Container Registry can be written to and read from by pretty much any system, you could also use third-party cluster management or CI/CD tools, even those that are not on the Google Cloud Platform. Auto scaling is a basic and fundamental selling point of any cloud service, including GCP, and of course the Container Engine has its own version. For automatic resizing of your clusters, you need to use something known as the cluster autoscaler. This will periodically check and optimise the size of your cluster, either increasing or reducing the number of instances. Let's say, for instance, that your container cluster is larger than it needs to be, and there are nodes that do not have any scheduled pods; those nodes will be deleted by the cluster autoscaler. On the other hand, if your container cluster is too small and you have pods that are facing inordinate delays before they are run, then the cluster autoscaler will add nodes and scale up your cluster. Notice that this kind of managed auto scaling was not something we got for free under the Compute Engine option, and this is why one might go with Container Engine over Compute Engine. Some stuff is already taken care of for us, but we still retain some degree of control. And that, in a nutshell, is when you'd go with containers and use the Google Container Engine. Let's now return to the question we posed at the start of this video. Now we know that this statement is false. It is entirely possible to deploy auto scaling on a container cluster, so it is not limited to the App Engine. As we saw with the cluster autoscaler, we can delegate the responsibility of either adding or reducing nodes in our container cluster to GCP. The cluster autoscaler will check whether pods have been kept waiting for lack of capacity and then go ahead and add container cluster instances. If, on the other hand, there are underutilised nodes, it will go ahead and delete them.
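As a sketch, enabling the cluster autoscaler at cluster-creation time looks something like this (the cluster name and node bounds are illustrative):

```sh
gcloud container clusters create my-cluster \
    --zone=us-central1-a \
    --enable-autoscaling --min-nodes=1 --max-nodes=5
```

With these bounds, GCP is free to grow the cluster to five nodes when pods are left unscheduled and to shrink it back down to one when nodes sit idle.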

10. Lab: Creating A Kubernetes Cluster And Deploying A WordPress Container

In this lecture, we'll set up Docker containers in the cloud using Google Container Engine. We will also see what you need to do to expose your pod to external traffic. Let's see how we can set up a Kubernetes cluster running on Google Container Engine from Cloud Shell. Let's set up some default config properties before we create our container cluster. We'll set the compute zone to be us-central1-a by default and the compute region to be us-central1. Let's create a cluster running on the Google Container Engine using the command line. We use the gcloud container clusters create command. The name of our cluster is my-first-cluster, and we want this cluster to have exactly one node. We are going to create a single-node cluster. This will be set up in the default project, default zone, and default region that you've specified in your configuration. That's us-central1-a. Notice the confirmation message on the command line. We have one node in this cluster, and its current status is running. Because clusters are simply a bunch of VM instances running together in some configuration, you can use gcloud compute instances list to see the list of clusters and VM instances that you have running. Now that we have a cluster up and running, we'll deploy WordPress to it. WordPress is simply an application that allows you to create and run websites very easily. We'll do this by deploying a WordPress Docker container on our cluster. This WordPress Docker container is publicly available as an image, and you can access it via --image=tutum/wordpress. We want this container with the WordPress application to run on port 80. Notice that we use the kubectl run command; kubectl is the command-line interface for working with Kubernetes clusters. This tutum image, which is an out-of-the-box WordPress image, contains everything you need to run this WordPress site, including a MySQL database. That's the wonder of containers.
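The commands narrated above, roughly as they would be typed into Cloud Shell (exact names and defaults may differ in your setup):

```sh
gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1
gcloud container clusters create my-first-cluster --num-nodes=1
gcloud compute instances list        # the cluster is just VM instances
kubectl run wordpress --image=tutum/wordpress --port=80
```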
It combines all of the applications that you need to run as a group. Having deployed this container to your cluster, what you created on your cluster is a pod. A pod is basically one or more containers that go together as a group for some meaningful deployment. In this case, for our WordPress container, there is just one container within this pod. You can use the kubectl get pods command to see the status of the pods that you have running. Notice that we have one pod whose name starts with "wordpress". There are currently zero out of one pods running, and the status is ContainerCreating. It's not ready yet. You can run this command a couple of times to see how the status updates. At some point, you should see the status as Running, and the ready column shows one out of one. Your pod is now ready, with your Docker container running successfully on it. When you first create a pod, its default configuration only allows it to be visible to other machines within the cluster. We don't want only the internal machines to access this pod; we want it to be made available for external traffic. And this we do by exposing the pod as a service, so that external traffic can access your WordPress site. kubectl expose pod is the command that you use for this. Specify the name of your pod, which is wordpress followed by a whole bunch of numbers and letters. The name that you want to specify for your service is wordpress, and you want to set up a load balancer for it. This load balancer creates an external IP address for this pod to accept traffic. This is what helps your WordPress site be available to external customers as a service. kubectl expose creates a service, the forwarding rules for the load balancer, and the firewall rules that allow external traffic to be sent to the pod. We named our service wordpress, and you can now use this name in order to check the services that we have available.
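The expose step described above looks roughly like this on the command line (the pod-name suffix is generated by Kubernetes, so yours will differ):

```sh
kubectl get pods                     # note the generated pod name
kubectl expose pod wordpress-3466-0a2b --name=wordpress --type=LoadBalancer
kubectl get services                 # the new wordpress service appears
```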
You can use the kubectl describe services command followed by the service name to see information about your WordPress service. At the very bottom here, under the title "Events", you can see the series of events that have occurred on this cluster. The very first thing that we did was to create a load balancer. At this point in time, the creation of the load balancer has not completed; it's still in the creating stage. Run the command to describe the services once again, and you'll notice that a couple of things have changed. The load balancer has finished its creation and now has an ingress IP address that you can use to direct external traffic to. The second thing to note, down below in the events, is that the load balancer has now been created. This was added as a new event. Let's use the load balancer's ingress IP address to view our WordPress site running on our Kubernetes cluster. There you will find the startup page for your WordPress site. If you've created a site on WordPress before, this should be very familiar to you. Click Continue, and you can then walk through the steps of actually creating a WordPress site if you want to. We won't be doing that here in this video right now, because we are more interested in the cluster itself rather than the WordPress site. Switch back to our Compute Engine VM instances page. Click through to the cluster, and you'll see that there is some activity in the graph at the very end. Our single-node Kubernetes cluster has shown some activity because we deployed a WordPress Docker image to it and launched a site. Explore the additional settings and config values that are associated with this Kubernetes cluster. Highlighted on screen is some custom metadata about the cluster itself. In this video, you learned that you need to run an explicit kubectl command in order to expose your pod to external traffic. You also need to set up a load balancer where external traffic can be directed.

11. App Engine

Here is a question that I'd like you to keep in mind. Let's say that you would like to use a specific version of an operating system. Can you do so using the App Engine standard environment? We know for sure that this is possible using a Compute Engine instance. The question is, can you do this using an App Engine standard environment? Think about this, and let's revisit it at the end of the video. Using virtual machines and containers requires a fair amount of low-level nitty-gritty. What if you decide that you don't want to deal with any of that? What if you'd like to use pretty standard platform-as-a-service web development tools or technologies such as Heroku or Engine Yard? Both of these are pretty classic web development platforms, which you could choose to leverage. And the GCP equivalent of doing that is making use of App Engine. Using App Engine, you could set up a pretty complex web app with very little effort. You just focus on writing the code. For instance, you could support different clients, whether they are Android or iOS or even desktop apps. You would have load balancing in front of your app, using Google's Cloud DNS, and App Engine uses Memcache and whatever storage technologies it needs on the back end. For instance, it might have Cloud Storage, Cloud SQL, or Cloud Datastore. All of these pieces stack up one after the other, culminating in yet another App Engine feature, this time autoscaled: this would be a batch app at the very back end of your architecture. And this is how hosting on Google App Engine would work. We've now spanned the entire range; we've run the gamut of options available for hosting a web app or a website on GCP. Hopefully this gives you a good sense of the differences between just dumb storage, dumb storage with some added bells and whistles via Firebase, then the use of the Compute Engine, that is, virtual machines, and the Container Engine, and finally the App Engine.
Because the App Engine is really serverless, it's probably more correct to refer to App Engine instances as App Engine environments, but really, these terms are used interchangeably. There are two important types of environments: standard and flexible. The App Engine standard environment is preconfigured with some software like Java 7, Python 2.7, Go, PHP, and so on. This really is a container. This is a standard environment in which you can't change a thing. So again, when you run your code in an App Engine standard environment, what you're really doing is making use of a standard container. In contrast, there is also a flexible environment. Flexible environments offer a range of choices. You can tweak stuff and customise it. For instance, you might choose to make use of Java 8, Python 3, .NET, or some other nonstandard environment like that. And really, under the hood, a flexible environment is nothing but a Compute Engine VM. And so we can decompose the two environments, which are made available to us via App Engine, and express them in terms of the other compute options. Again, a standard environment is really a standard container; a flexible environment is really a VM instance. And as we've already alluded to, the App Engine is serverless. You don't need to go out and ask for specific instances. There are instance classes that determine the pricing and the billing, and each of these has a laundry list of specifications, so you're paying for what you use. The App Engine standard environment, which we already discussed, is based on container instances. Remember that these, in turn, are running on top of Google Compute Engine VMs. Each runtime in the standard environment includes all of the libraries, and it's possible that, for many use cases, the standard environment is really all you need. When you execute your code in the App Engine standard environment, that code is running in a secure, sandboxed environment, and there's a lot more going on behind the scenes.
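As a sketch, a standard environment app is driven by a small app.yaml configuration file; this minimal Python 2.7 example is illustrative, and assumes a WSGI app object named "app" in main.py:

```yaml
# Illustrative app.yaml for an App Engine standard environment app.
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app
```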
The Google Cloud Platform is distributing all of your requests across multiple servers and scaling those servers to meet your traffic demands. All of this is completely transparent to you. Your application runs within its own reliable environment, independent of the hardware, the OS, or the physical location.

A standard environment is really just a standard container. Extending that same line of thought, a flexible environment is nothing more than a Compute Engine virtual machine. You can customise what a flexible environment looks like fairly easily: all you need to do is specify your own Dockerfile, which can contain your own runtime and even the OS image that you would like to use.

These two environments, standard and flexible, are really the bedrock of PaaS, that is, Platform as a Service, on Google Cloud. But this is also a great place for us to understand Cloud Functions. These are floating bits of functionality that run in a serverless execution environment for connecting and building different services. Here, your job is to write simple, single-purpose functions and attach these functions to events. The events could be generated by your cloud infrastructure, such as logging, or by other service events. Every time your specified event is fired, your cloud function will be triggered.

Cloud Functions are serverless. This is important, and for that reason they should not be associated with any disk or persistent storage of any sort. Your code will be executed in a completely managed environment. You do not need to, and in fact you should not, provision any infrastructure or worry about managing servers. Cloud Functions are written in JavaScript and can run in any standard Node.js runtime. They are just yet another weapon in your arsenal when you are considering different compute choices on the Google Cloud Platform.
Armed with this new knowledge about the compute options, let's come back to the question we posed at the start of the video. The answer is no: you cannot customise the OS if you're using the standard environment. However, you can customise the operating system if you are using a flexible environment. To understand this, recall that an App Engine standard environment is basically a container, and an App Engine flexible environment is basically a Compute Engine VM instance. If you think about it that way, you can't really change the OS of a container, but you can change the OS of a Compute Engine VM instance. And that's why OS optimisations or OS changes are something you can carry out in a flexible environment but not in a standard environment.
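As noted earlier, customising a flexible environment comes down to supplying your own Dockerfile. A hedged sketch of what that might look like for a Python app; the base image, port, and start command are illustrative assumptions:

```dockerfile
# Sketch of a Dockerfile for an App Engine flexible environment custom runtime.
# Base image and commands are illustrative assumptions, not from the course.
FROM gcr.io/google-appengine/python   # a Google-provided base image
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
# The flexible environment routes traffic to the container on port 8080
EXPOSE 8080
CMD ["gunicorn", "-b", ":8080", "main:app"]
```

Because this is just a container image running on a Compute Engine VM, you are free to swap in a different base image, and with it a different OS, which is exactly the freedom the standard environment withholds.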

12. Contrasting App Engine, Compute Engine and Container Engine

Here is a question that I'd like you to think over as you watch this video. Let's say that you have a very high-end machine learning model, which you've implemented in TensorFlow. You would like to run this on the cloud. What compute option should you pick? Think about this, and let's revisit it at the end of the video. Here, GPUs refer to graphics processing units: high-end hardware units available for parallelizing computations in machine learning apps like TensorFlow.

Using a little case study involving hosting a website or a web app, we have now spanned the gamut of compute options on GCP. We started with Cloud Storage, then moved on to a slightly more intelligent version that made use of Firebase hosting, and then got to the three important options: the Compute Engine, the Container Engine, and the App Engine. Let's generalise this conversation into a discussion of the pros and cons of these three choices.

So let's set up three columns, starting with the PaaS option, which is App Engine; then the hybrid option, which is the Container Engine, or Kubernetes Engine; and lastly, the IaaS option, which is the Compute Engine.

App Engine is a flexible, zero-ops, serverless platform. You should use App Engine if high availability is very important to you, because you do not want to be taking charge of instances going up and down. With the Container Engine, you don't need to manage individual containers, but you do need to work with Kubernetes, the container orchestration system that is the foundation of GKE. This still abstracts away a whole bunch of DevOps stuff, but you can't escape some of it. At the other end of the spectrum, we have our Compute Engine virtual machines. These run on Google's data center infrastructure, and you are directly responsible for their fate.

It's important to ask yourself how much involvement you would like. Choose App Engine if you only want to focus on writing code and do not want to touch the infrastructure.
If you want to deploy your applications very fast and separate your applications from the OS, you can consider using containers. And if you require extremely fine-grained control over the infrastructure, such as high-performance hardware like GPUs or local SSDs, Compute Engine VMs are your only option.

Ask yourself: do you really know or care about the operating system that your code is running on? If the answer is no, go with App Engine. If you don't really care which specific operating system you're using and you don't have any dependencies on it, then you can go with the Container Engine. And if you have a strong point of view on OS-level choices, such as the networking or the graphics drivers, or you really want to squeeze out performance, then you go with Compute Engine.

Prototypical use cases for App Engine might include websites, mobile apps, or gaming back ends; for the Container Engine, any type of containerized workload, for example if you want code that you are running on premises to work right away on GCP. And you would use the Compute Engine whenever you need a specific OS or OS configuration.

If your architecture makes heavy use of RESTful APIs, App Engine is probably a good option for you. If you are building cloud-native distributed systems, then containers are a natural choice. But if you find that your code cannot be easily containerized, if it relies on specific existing virtual machine instances, or if it is currently deployed on premises and you want it to run on the cloud as is, without containerization, then maybe Compute Engine VMs are your best bet.

The beauty of using a cloud service provider like GCP is that there is nothing preventing you from doing some mix-and-match of your own. For instance, you might use App Engine on the front end, and then you might have some other specific software, maybe a document data store or a key-value data store like Redis.
If you decide to go ahead and run that on a Compute Engine VM, no one's stopping you. Or maybe you have something like a heavy, graphics-intensive application where you do a lot of frame rendering. You might want a rendering microservice: a replica that will run on a whole bunch of machines, or node instances, in a container cluster, and then your Compute Engine VMs hook up to that container cluster, so that you're using both Kubernetes and Compute Engine VMs. Or take a typical use case of this variety, which we'll see when we get to the big data portions of this course: you have an App Engine web front end, you do some transaction processing using Cloud SQL, and you also have a bunch of big data processing that you want to explicitly run on Hive and not on BigQuery, and so you make use of Compute Engine. All of these mix-and-match options will work just fine for you.

That gets us to the end of this module, in which we covered a lot of ground on the compute options available in GCP. So let's summarise all of that, at the risk of belabouring the point. There are three important options: App Engine, which is the Platform-as-a-Service option; Compute Engine, which is the Infrastructure-as-a-Service option; and Container Engine, which lies somewhere in between. By using each of these three options, and also mixing and matching them according to your requirements, you are virtually guaranteed to get all of the compute capacity that you require without locking up a whole bunch of money in expensive hardware. And that really is the whole point of computing on a cloud platform such as GCP.

Let's come back to the question we posed at the start of this video. If you require a specific hardware feature, such as GPUs, well, then the answer is clear. You want a really fine level of granularity over your hardware; you need infrastructure as a service. And so what you need is a Compute Engine instance.
This is the only one of the three compute options that gives you this level of granular support; neither App Engine nor Container Engine offers it. So a Compute Engine instance is the way to go.
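As a hedged sketch of that conclusion, attaching a GPU to a Compute Engine VM might look like this with the `gcloud` CLI. The instance name, zone, machine type, accelerator type, and image are all illustrative assumptions, and GPU availability varies by zone:

```sh
# Sketch: create a Compute Engine VM with an attached GPU for TensorFlow work.
# All names/values below are illustrative; check current availability first.
gcloud compute instances create tf-training-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-k80,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-1604-lts \
    --image-project=ubuntu-os-cloud
```

Note the `--maintenance-policy=TERMINATE` flag: GPU-attached instances cannot live-migrate during host maintenance, which is exactly the kind of hardware-level concern that neither App Engine nor Container Engine exposes.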




Comments
* The most recent comments are at the top
  • linda34
  • India
  • Oct 24, 2020

simply put, I wouldn’t have made it without these Professional Data Engineer questions and answers… I realized just days before the exam that I needed to do so much in order to pass this test… and these materials helped me just even though i had so little time! Am very grateful to ExamCollection!

  • J_Cole
  • Brazil
  • Oct 16, 2020

@gavin, these google professional data engineer vce files will certainly be of great help in your exam but i wouldn’t advise you to shut out other learning materials… it is wiser to combine varied resources which will make sure you don’t struggle at all during the exam. wish you luck☺

  • gavin
  • Singapore
  • Oct 03, 2020

will the braindumps for Professional Data Engineer exam be enough to get ready for this test?? am using them to prepare and so far they seem okay to me but want to hear some advice from those who already passed the exam…

  • Sawyer41
  • South Africa
  • Sep 26, 2020

Guys, I have some advice for candidates… using these Professional Data Engineer practice tests can save you a great deal of time and still be very effective. they saved me just when I thought I was out of time… thumbs up ExamCollection for these superb materials and for providing this treasure just for free!

  • F_Parker
  • Singapore
  • Sep 15, 2020

@jace, i used several materials but can surely say that i aced this test thanks to these dumps for Professional Data Engineer exam… they helped me build strength in all the required areas so the exam was easy for me. their level of validity is commendable! you should go for these braindumps if you want to score high, they will help you too!

  • jace
  • United States
  • Sep 08, 2020

anyone who has passed this exam recently? did you use these Google Professional Data Engineer exam dumps? if yes, to what extent were they helpful?

  • nad
  • Singapore
  • Aug 25, 2020

Hi,
Need help to get practice exam for google data engineer.
Tnx.

  • Joe
  • United Kingdom
  • Aug 14, 2020

Please can you tell me where I can find the download. Many thanks.

  • Rishikesh Chandrashekhar Gaikwad
  • Qatar
  • Aug 07, 2020

Need it urgently.


