Google Professional Cloud Developer – 1. Designing highly scalable, available, and reliable cloud-native applications


1. Section 1 Coverage

Let's talk about the objectives here. Section one of the exam objectives is focused on designing highly scalable, available, and reliable cloud-native applications. Well, what does that actually mean? Basically, we're going to look at how to design our applications in a performant manner. That means looking at, for example, how we design an application that can integrate not only with the cloud but also with mobile environments. We'll be using a range of cloud-native development languages that you'll become familiar with throughout the course. And when we design our applications, we need to make sure they're secure. We also need to understand some of the tools available in Google Cloud, such as Identity-Aware Proxy or Cloud Security Scanner, and the numerous other tools we could use as part of the IAM structure.

We then need to know how to manage our data. When we collect our data, how do we manage it? What services do we use to help facilitate managing our application data? Do we put it in Cloud Storage? Do we put it in Cloud SQL? Do we put it in Cloud Datastore? It's all about the data, remember? Now, what about taking legacy apps and moving them over to Google Cloud? How do we take, for example, an application that was written to run on-prem and port it over to Google Cloud? We'll talk about some of the services that are commonly used as part of this, and we'll cover some of the best practices for migrating to Google Cloud. So let's go ahead and move on. We have a lot to cover in this section.

2. 1.1 Designing performant applications and APIs

In this section we're going to cover pretty much all of section one. Here are the topics: designing performant apps and APIs, and as part of that, the sub-objectives around the cloud service model, portability, evaluating system considerations, operating systems, locality of services, defining your key structure, session management, API management, and health checks. So let's proceed. There's a lot to talk about.

3. Infrastructure as a Service vs. Container as a Service vs. Platform as a Service

Let's talk about selecting the correct cloud service model. As in any cloud platform, we want to be aware of the options that are available. There could very well be two or three different ways of doing basically the same thing using different services. So what we want to be aware of are those utilization concerns. For example, do we use Compute Engine, or do we use Kubernetes Engine or App Engine? Some of the platforms, such as Platform as a Service, may have ease of use but may not have the flexibility that you want, and Compute Engine and Cloud Functions are good contrasting examples as well. Now, one of the things we want to be aware of is: do we work in a serverless environment? Is it a managed platform? Does it use container technology? Do we build our own virtual machines and deploy them with their own templates? Do we have flexibility? Do we have control? These are just some of the things we want to consider.

Now, one of the areas that you may want to keep memorized is understanding where your operational responsibilities lie. Generally, in the world of Google, it's similar to a shared responsibility model, the way I look at it: it's basically hands-off for Cloud Functions, but more hands-on for Compute Engine. And again, it's all about what you as a developer, engineer, or architect, whatever your role may be in your company, are responsible for on the platform. Do I manage the performance? Do I manage security? Do I need to worry about scalability? What about, for example, applying patches to my virtual machines, or bug fixes when I deploy Windows? And if I'm deploying containers, what do I need to do in Kubernetes Engine? You want to know what is managed by the provider and what is managed by you. That about covers this sub-objective, so let's move on to the next one.

4. Portability vs. platform-specific design

Portability and design. When we're developing our applications to be ported to Google Cloud, we want to be aware of several things. The first, of course, is the application platform we're migrating from and to. What is the application developed on? Is it open source or is it proprietary? A lot of little questions should come to mind during the process of porting an application that might be running on-prem over to Google Cloud. It's one thing if it's open source; it's another thing if you're using a vendor-specific format, which is probably going to be a little more challenging in a lot of cases. If you're using the OpenAPI (Swagger) format, then that's great. So you have to consider a lot of moving parts. For example, we know that we could port SQL over from on-prem to Google Cloud; as long as it's MySQL or PostgreSQL, we shouldn't have much of a problem in most cases. However, we could have some challenges, especially around how our application is coded, since some SQL syntax is not supported. When we get to Cloud SQL later in the course, we'll talk a lot more about database structure, service calls, instances, and all of that. But be aware that Cloud SQL might be more portable than, say, Cloud Spanner.
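To make that portability point concrete, here is a minimal sketch assuming a hypothetical database host, schema, and credentials: because Cloud SQL for MySQL speaks the standard MySQL protocol, the same client code you run against an on-prem MySQL server typically works unchanged after migration. Only vendor-specific SQL syntax tends to need rework.

```python
# Minimal sketch: the same MySQL client code works on-prem and against Cloud SQL.
# Host, database, and credentials below are hypothetical placeholders.
import pymysql

conn = pymysql.connect(
    host="10.0.0.5",          # on-prem server today, Cloud SQL instance IP after migration
    user="app_user",
    password="app_password",
    database="inventory",
)

try:
    with conn.cursor() as cur:
        # Standard ANSI/MySQL SQL ports cleanly; vendor-specific syntax is what
        # usually needs rework during a migration.
        cur.execute("SELECT id, name FROM products WHERE stock > %s", (0,))
        for product_id, name in cur.fetchall():
            print(product_id, name)
finally:
    conn.close()
```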

Now, if we're already on Cloud Spanner, that is a proprietary format. It supports SQL, but it's Google Cloud's own service; it isn't open source, at least at the time of writing, and we have to be aware of that. If we're on Google Cloud with Cloud Spanner and our applications are designed around Cloud Spanner, then we're not going to be able to easily port them off in a lot of cases. Another area we might want to think about is our virtual machines. For example, AWS supports more formats than Google, and if we're using VMDK, then we should be fine. If we're using a format that's not supported by Google, then we'll need to convert it to a supported format, look at re-architecting or designing another approach, or possibly use a third-party tool. With that said, we have to look at formats for our virtual machines, and also for our containers.

It's one thing if we're using Docker containers; it's another thing if we're using a different container format. So we need to think about these little details before we start migrating and porting over to Google Cloud. If we're on open source, it certainly is going to make life easier. If we're using, for example, a proprietary format in AWS, then we may have some challenges to consider. With that said, it's one thing if it's open source versus vendor-specific. Most of AWS's platform is vendor-specific; it's not open source. Google has traditionally been open source from day one, and it's just a different culture in that respect. So just be aware, these are some of the things we may want to think about. Let's go ahead and move on to the next section.

5. Evaluating System Considerations

Evaluating system considerations. There's a good number of considerations here. For example, do we use containers or not? If we do use containers on Google Cloud, we have a couple of choices for how we deploy them. We could deploy containers on App Engine or Kubernetes Engine, or, if we want, we could deploy Compute Engine and then run a container runtime on top of that. Why we would want to do that is beyond me in most cases, but just be aware that we could deploy containers in several ways. When we're considering App Engine, we want to look at both sides: do we feel confident that it will handle scalability appropriately or not? One of the things to consider, too, is whether we integrate with on-prem services. Are we looking down the road to integrate with Anthos or Kubernetes on-prem? If so, we might want to stay with Kubernetes Engine.

There are a lot of considerations. Another example: if we're deploying pipelines in Google Cloud, we may want to look at what we're using on-prem. And if we're using Kubernetes Engine, we may want to look at complementary services that make creating a pipeline somewhat easier and better integrated. Then, for any system, especially if we're looking at going to Google Cloud, we want to look at cloud security integration and enterprise networking. For example, we want to consider not only some of the tasks we have to do, such as managing our Google identities.

GCP generally uses what are called Google Accounts for authentication and access management. For example, developers may have Google Accounts already. Do we want to allow them to use GCP with those accounts, or create new accounts? If we're using Cloud Identity or G Suite, that brings up other areas of cloud security. We may want to consider some of the Google services around security: Cloud Armor, Identity-Aware Proxy, encryption, firewall rules, external traffic (NAT, for example), groups, and service accounts. For integration, we want to consider what we're integrating with. Is it on-prem? Is it another cloud provider? The best way to integrate an application, in most cases, is through the use of containers; they provide that portability and integration, though I know that's not always possible. Networking is going to be a big deal as well. We have to consider how we're connecting to Google Cloud. Is it hybrid connectivity? Are we going to use Cloud VPN or Cloud Interconnect? Are we using a VPC?

How many VPCs? How are our projects set up? We also want to look at performance requirements: transactions per second, maybe latency requirements. What about the stack we're reusing, any overhead to deal with, database requirements? The list of requirements could be endless. Another thing to think about is availability. Governance is another area that can definitely cause some headaches, for example GDPR, SOX, KYC, AML; you name it, it could be one of your challenges. Then there's cloud structure: how are we structuring our services? Are we creating pipelines across many different projects?

Or is everything in one project? Do we tie in G Suite or Cloud Identity? What about the development we're working on? How are we handling development: the development stack, programming languages used, IDE environments, APIs? Those are just some of the things around development we may want to consider. We may also need to be concerned about other areas that result from the system or platform we use: billing, costing, compliance, return on investment, and TCO. Lastly, cloud migration concerns could be another area we want to plan for. Again, this is not an exhaustive list, but it covers most of the common ones. If you check out Google's Enterprise Best Practices, it covers a lot of this already.

And again, if there's one thing we want to study around considerations, it's the Enterprise Best Practices. The webpage you'd want to take a look at is called "Best practices for enterprise organizations." It is just one of several pages you may want to look at, and you will find that we reference it significantly throughout the course. In reality, if you take any Google exam, there is going to be quite a bit of reference to this one page. Anytime you see "best practices" in Google's exam objectives, this is what they're referring to in most cases. There are other, smaller best practices pages, like the one for Cloud Storage, et cetera, but this is where we want to start. Let's go ahead and move on.

6. Operating system versions and base runtimes of services

Operating system versions. Now, when we're deploying our services, we're likely deploying them on a VM, or, if we're deploying them in containers, we want to be aware of our VM or container images and the options for deploying them. For example, if we're going to deploy Red Hat Linux, we want to know whether it's supported on the platform, whether it's available as a GCP or Marketplace template, or whether we need to deploy it with a custom image. We also want to know which options are supported. For example, we would need to consider different security measures for Windows than we would for an Ubuntu image. It really comes down to understanding the operating systems, the options that are available, and how we would deploy them. Be aware, too, that if we want to deploy, for example, SQL, there are specific templates for that, and we may want to deploy a Marketplace solution. It's also very common now for development teams to develop more in the cloud and use cloud services. For example, you may want to run Jenkins on Google Cloud, and you could deploy it on a virtual machine with whatever image is supported, on Linux in most cases. You're not going to deploy that on Windows in Google Cloud, at least at the time of writing, and I don't know why you would. Just be aware that there could be some great options available. Also, when it comes to options, the Marketplace has some additional capabilities that are not available in Compute Engine by default, so check out what is available. That's pretty much all for this objective, so let's move on to service locality.
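As a quick illustration of checking which base images are available before committing to an operating system, here is a minimal sketch using the google-cloud-compute Python client. The public image projects and families listed are common examples, not requirements; swap in whatever your team standardizes on.

```python
# Minimal sketch: look up the latest image in a few public image families
# before deciding which OS to standardize on. Requires google-cloud-compute.
from google.cloud import compute_v1

images_client = compute_v1.ImagesClient()

# Public image projects and families; these are examples, adjust as needed.
candidates = [
    ("debian-cloud", "debian-12"),
    ("ubuntu-os-cloud", "ubuntu-2204-lts"),
    ("rhel-cloud", "rhel-9"),
]

for project, family in candidates:
    image = images_client.get_from_family(project=project, family=family)
    print(f"{family}: latest image is {image.name}, status={image.status}")
```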

7. Service Locality

Service locality. When we think about locality, we want to think about a couple of factors. First of all, we want to understand where our user base is going to be, and this may not be the easiest question to answer immediately. In some cases it could be a mobile app or a new web service with no real history; it's more of a startup situation, who knows? But trying to estimate where your user base is will absolutely help you determine the best region and zone in which to deploy your application and services. So, for example, if we deploy App Engine, we need to be aware of which regions App Engine is available in. Pretty much every region is supported with App Engine; there are two regions, at least at the time of writing, where it is not. So you just want to double-check what is available at the time of your deployment. Now, App Engine is a regional service, and we don't want to deploy App Engine one month and then try to move it the next month.

Because App Engine is regional, it doesn't move. So what does that mean? We have to deploy another instance in another region. So we want to pick and choose our App Engine regions appropriately. And then what about other services such as Cloud Spanner? Well, Cloud Spanner has two different deployment approaches: we can deploy it regionally or multi-regionally.

We need to look at how we deploy Cloud Spanner, for example. But it's not just data services; what about our compute services? What about our Cloud Storage? Another thing to think about is that some services are deployed with more of a content-based approach. Cloud Storage content is cached, and App Engine content can be cached as well, at what are essentially edge locations, similar to a CDN in a lot of cases. So we want to be aware of our deployment strategy. Let's go to a whiteboard and talk more about this.
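To make the locality choice concrete, here is a minimal sketch using the google-cloud-storage Python client that creates one regional bucket and one multi-region bucket. The bucket names and locations are hypothetical examples; the point is that locality is something you pick explicitly at creation time.

```python
# Minimal sketch: choosing bucket locality with the google-cloud-storage client.
# Bucket names and locations below are hypothetical examples.
from google.cloud import storage

client = storage.Client()

# A regional bucket close to an EU user base.
eu_regional = client.create_bucket("example-app-assets-eu", location="europe-west1")

# A multi-region bucket for content served to users spread across the US.
us_multi = client.create_bucket("example-app-assets-us", location="US")

for bucket in (eu_regional, us_multi):
    print(bucket.name, bucket.location, bucket.location_type)
```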

8. Whiteboard Service Locality

Let's talk about service locality briefly and some things you may want to think about before you go out and take the exam, just to make sure you grasp the simple concepts here. Again, it's simple, but sometimes when you're reading the questions, it may not immediately strike you what they're really asking for. What I'm trying to get at is that when you read the questions, you need to think about not only the Google Cloud region and zone but also your user base distribution. For example, if we have a mobile environment, we need to look at that differently than if all our users are located on-premises. In this case, let's say our users are located on-premises, which might be, I don't know, let's say Boston. Boston, Massachusetts, North America, for those not familiar with where Boston is.

So if all our users, in our call center, let's say, are on-prem, that makes our service locality challenge a lot easier, because we know where our users are. However, let's say we're not deploying a call center app; we're deploying something like a mobile app, or we have a remote workforce, whatever the situation is. So our users may be in North America, in South America, in Asia, wherever they may be. If we're deploying App Engine, we need to consider, first of all, our developers: where are they located? If our developers are all in the EU, then we want to deploy App Engine in the EU. On the other hand, if our compute services for production are serving applications for the US and, let's say, the European Union, then we know we need to deploy services in the EU, and we may want to consider which region in the EU we choose. Do we choose Finland or not? In the US, do we choose Iowa, Oregon, or South Carolina, for example? Again, we need to know where the user base will likely be. Another area to consider is essentially focused on the CDN and the edge. From a networking perspective, we're going to talk a lot more about networking throughout the course, but we want to pay attention to the ability to have our services cached, like App Engine and Cloud Storage. If we have users near an edge location where App Engine content is cached, what's nice is that a lot of the Google services that are cached aren't just statically cached; in reality they're almost dynamically cached, which is somewhat different.

So that's a really nice benefit, and we want to consider that as well. Then if we have, for example, a lot of mobile users in Asia, what do we want to do? Let's say we're deploying a mobile environment with App Engine. Chances are, with App Engine, we may want to use Cloud Datastore and Cloud Storage. Not only do we want to use Datastore for profiles, but maybe we want to use Cloud Storage for ingesting that data and then pulling it into analytics as well.

So where do we dump that data, and where do we pull it in from? Also, some services are global; they run on a global network. Cloud Pub/Sub, for example, isn't so much a locality concern, but we need to be cognizant of how it works. What I'm trying to get at here is this: services in GCP are either regional, zonal, or geo-distributed like Cloud Spanner. Another thing we already know, for example, is that Cloud SQL is a regional service. So when we deploy a service, we want to be cognizant of where that service lives. Before you take the exam, try to know the basics about how services are deployed, whether they're regional, zonal, et cetera, but also how you would address the user base. So let's go ahead and move on to the test tips and see how we do.

9. Locality Test Tips

Let's talk about some test tips. Going into this exam, we want to be aware that there are going to be some questions that address locality, and this is really true not just for the Developer exam but also for the Cloud Architect and Cloud Engineer exams. A lot of the Google exam questions, whether they're case study questions or just standard questions that aren't case-study based, are going to give you a scenario, and you'll need to figure out how to solve the problem or what you would do to reduce latency. And sometimes, just be aware of this:

They may not directly say "reduce latency"; instead they might say, "Joe, the application is not performing well and the customer is upset. How do you solve the problem?" So you know it's a latency issue, unless they say it's a UX issue, in most cases. You may have to go and look at using Debug or Trace, look at log files, whatever the issue is. But if you're deploying an application in Google Cloud, you need to be aware of locality. Cloud SQL is a regional service, whereas Cloud Spanner can be multi-regional. It also makes sense to deploy services such as Cloud Storage, Compute Engine, even Cloud SQL and App Engine, closer to the user base if you can. I know that's not always the optimal way to approach things, especially in a mobile world, but it would make sense in most cases.

App Engine is a regional service. So when we're deploying App Engine, it is regional, not multi-regional; we can't move it, and it's not something we can migrate. So what do we have to do? We have to deploy another instance of App Engine. Also, Cloud Storage is one of many services that is essentially cached at edge locations. So we might want to be aware of where the customers are, and if you can identify edge locations where Cloud Storage content would be cached, that may help your use case for deploying specific services as well. That's the test tip. Let's move on to the next subject.

10. Microservices

Evaluating microservices architecture. You can certainly be assured you'll likely see something on microservices on the exam, a few things in fact. What we want to focus on for this objective is understanding what a microservice is, why we want to use a microservice, and some of the benefits. We'll talk about microservices now, then go over to a whiteboard, and then there will be some test tips on microservices as well. The first thing about microservices is that if we have a legacy app and we want to move to microservices, there's generally a lot of work to do. However, the benefits are numerous: maintainability, isolation of faults, portability, you name it. Microservices is really an architecture, and if we go from a legacy app to microservices, we're changing the architecture of our application. If we think about it from a design perspective, it's a different way of thinking, because applications 20 years ago, for example, were designed to be a box; everything was in the box. Microservices are not designed to be one box; they're designed to be numerous boxes.

We're going to create and tie those boxes together through an architecture that gives us flexibility, integration, performance, isolation, et cetera. With that said, we first want to know what microservices are, and next we want to realize that there are some benefits to them. For example, they're simple to deploy and understand in most cases, they're reusable, and they give us faster fault isolation. In a legacy app, if we had a query service, traditionally that query service was tied not only to the user application but also to the database, and all the plumbing and coding in the background made everything pretty much one application in the old days. Now, when we develop our applications, each individual service gets its own little microservice. Instead of putting 20 different services together into one large service, we break them up into 20 mini services, or microservices. This also reduces risk. A small sketch of a single-responsibility service follows, and then let's go to a whiteboard and talk about microservices, specifically the Google Cloud services we may want to consider.
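As a tiny illustration of "one small service per responsibility," here is a minimal sketch of a standalone quote/pricing microservice in Flask. The endpoint paths and sample prices are made up for the example; the point is that a service like this can be built, deployed, and scaled independently of the data or image services next to it.

```python
# Minimal sketch of a single-responsibility microservice (a quote/pricing service).
# Endpoint paths and sample data are hypothetical; Flask is used for brevity.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real service this would come from a database or another service.
PRICES = {"widget": 9.99, "gadget": 19.99}

@app.route("/quote/<product>")
def quote(product):
    price = PRICES.get(product)
    if price is None:
        return jsonify(error="unknown product"), 404
    return jsonify(product=product, price=price)

@app.route("/healthz")
def healthz():
    # A simple health check endpoint a load balancer or Kubernetes probe can hit.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```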

11. Whiteboard- Microservices

Let's talk about microservices on Google Cloud. There are, of course, several different ways we could deploy microservices. Generally, we'll want to consider Cloud Functions if there are only a few things we need to do. If we're looking to put together an application that is fully developed as a microservices architecture, then we want to look at Kubernetes Engine or App Engine. In this example, I just want to talk about Kubernetes Engine. Here we have, for example, an application; this could be any user-facing application that is going to be deployed with Docker containers.

In this case, perhaps we have a data service, and the data service may go out looking for products that are available; this might be a mobile app, whatever it is. There may be an image service that we would deploy for an e-commerce store site, and then a quote service, or pricing service as it might be known. And this application is load balanced as well; of course, you wouldn't want to deploy a production application on any cloud service without load balancing. So in this case we would have four different services running on essentially four different containers at a minimum.

Now, if we're looking at Kubernetes, for example, we may want to consider packaging these up as Docker containers and then deploying them as pods. But as part of this, we want to look at a few things. Let me get a different color. First of all, let's say scaling: how are we going to scale this? For example, we could scale horizontally with what's called the Horizontal Pod Autoscaler. I'm not going to write it down, I'm going to text it in. I'll also talk more about containers throughout the course; this is just one little snippet. So, horizontal pods, for example: the Horizontal Pod Autoscaler.
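For illustration, here is a minimal sketch that creates a Horizontal Pod Autoscaler for a hypothetical "quote-service" Deployment using the official Kubernetes Python client. The service name, namespace, and CPU threshold are assumptions for the example, not values from the lecture.

```python
# Minimal sketch: create a Horizontal Pod Autoscaler for a hypothetical
# "quote-service" Deployment. Requires the `kubernetes` Python client and a
# kubeconfig pointing at your GKE cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="quote-service"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="quote-service"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```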

Another thing we may want to look at is, of course, the Cluster Autoscaler as well. We might also want to look at the zones we're deploying in; we could deploy what's called a multi-zone (MZ) cluster too. Again, these are just some of the things we want to think about when deploying on Kubernetes, and that is just for Kubernetes. Now, what about if this was App Engine? Well, with App Engine we of course have options as well. For example, if we're going to deploy the same service on App Engine, we would want to deploy it on the flexible environment. And let me change my color here.

So instead of Kubernetes, we may want to deploy this on, and I'm just going to write "AE" for App Engine to keep it short and sweet. With App Engine, we may want to look at our runtime environment; we may want to create a custom runtime, integrate with Cloud Datastore, create service accounts, and other things of that nature. But we definitely want to use the flexible environment because, again, that's what supports containers. Also, when we're thinking about App Engine, we need to pay attention to the fact that it's a regional service. So those are just a few thoughts on how we could go with microservices. We're going to be talking a lot more about Kubernetes and App Engine throughout the course. Let's move on to the next subject.

12. Test Tips – Microservices

Let's talk about microservices test tips now. In Google Cloud, we want to be aware of not only what a microservice is, why we want to use one, and what the benefits are, but also which Google Cloud services we can use for microservices on GCP. You want to go into the exam being able to distinguish between using Cloud Functions versus App Engine, for example.

Of course, there are many benefits to each service. The question is what you want and what you want to get out of it. We previously discussed a lot of this in the whiteboard, so for time purposes, we're going to move on. Let's talk about defining key structures.

13. Defining a key structure for high-write applications using Cloud Storage, Cloud Bigtable, Cloud Spanner, or Cloud SQL

Defining a key structure for high-write applications using Cloud Storage, Cloud Bigtable, Cloud Spanner, or Cloud SQL. Now, when it comes to defining our key structures, we want to be aware that there are certainly some best practices, and we want to know how to avoid hotspots. One of the best practices is hashing the key and then storing it in a column. Another thing we could do is swap the order of the columns in the primary key. Also note UUIDs: we want to use version 4. This is recommended because it's not only secure but also uses a random value. We could also bit-reverse sequential values. One thing to note about a UUID is that it's something we're going to reference in an application; it provides us, of course, with a unique identifier. It could also reference someone's computer, laptop, mobile device, whatever, as far as key structures go.
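Here is a minimal sketch of those three key techniques in Python: hashing a natural key, generating a version 4 UUID, and bit-reversing a sequential value. The field names and sample values are made up for the example.

```python
# Minimal sketch of three hotspot-avoidance techniques for key design.
# Field names and values are hypothetical examples.
import hashlib
import uuid

# 1. Hash a natural key so writes spread evenly instead of clustering.
user_email = "jane@example.com"
hashed_key = hashlib.sha256(user_email.encode()).hexdigest()

# 2. Use a version 4 UUID as a primary key: random, so no monotonic hotspot.
row_id = str(uuid.uuid4())

# 3. Bit-reverse a sequential value (e.g. an auto-increment ID or timestamp)
#    so consecutive writes land in different key ranges.
def bit_reverse_64(value: int) -> int:
    return int(format(value, "064b")[::-1], 2)

sequential_id = 1234567
reversed_key = bit_reverse_64(sequential_id)

print(hashed_key[:16], row_id, reversed_key)
```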

Now, specifically around Cloud Spanner, there's probably a little more here than what you need to know for this part of the module, but I want to make sure we're at least aware of it; we'll go through Cloud Spanner more throughout the course. For this specific objective, we want to be aware that it is a strongly typed, relational, structured database. We can define tables with structured rows, columns, and values, and they can contain primary keys. One of the things to pay attention to on the exam is the main difference between using Cloud Spanner and Cloud SQL.
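As a hedged illustration of the key-ordering advice above, here is a sketch of a Spanner table where a UUID leads the primary key so a monotonically increasing timestamp doesn't become the leading key part. The instance, database, table, and column names are invented for the example, applied with the google-cloud-spanner Python client.

```python
# Minimal sketch: a Spanner schema where the UUID leads the primary key so the
# timestamp (which increases monotonically) doesn't create a write hotspot.
# Instance, database, table, and column names are hypothetical.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("example-instance").database("example-db")

operation = database.update_ddl([
    """
    CREATE TABLE UserEvents (
        UserId STRING(36) NOT NULL,
        EventTime TIMESTAMP NOT NULL,
        Payload STRING(MAX)
    ) PRIMARY KEY (UserId, EventTime)
    """
])
operation.result()  # wait for the schema change to complete
```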

For example, one is strongly typed, and the other is more like the relational database we're probably using every day. With Cloud SQL, we want to follow the rules related to schema object names and character sets, so pay attention to the character sets when using Cloud SQL; it supports basically the same object names and character sets as the underlying engine. We want to at least start by following the rules on those web pages. Now, Bigtable is a different animal; it really isn't a relational database. It provides a data structure, but it's not relational, so it is very different. Just be aware of that. The main things to remember are that, unlike SQL, Bigtable reads rows atomically, and we need to limit the amount of data we store in a row: the longer the row, the slower the performance. That takes a little getting used to, but we need to be aware of it. Now, the best practices are highlighted here.

Again, anytime we see "best practice," we know Google will likely want to ask about it on the exam. So be aware that we don't want to store more than 10 MB in a single cell or 100 MB in a single row. When it comes to performance, we want to use the row key; this is what Bigtable queries use, scanning a range of keys to retrieve the data. Also, with Bigtable, we may want to dynamically rebalance the key space, and there's a nice utility to help with that called Key Visualizer. There is a demo on this.

Further on in the course, I'll walk you through how Key Visualizer works. Basically, it generates hourly and daily scans for every table in your instance, assuming the table meets one of two conditions. I won't read those to you; go ahead and review the document linked here, because it's very important that you understand this. You'll likely get a performance-related question on the exam for Bigtable or BigQuery, so we need to understand how to deal with performance issues: how do we rebalance, and how do we adjust to what's called a hotspot? If we do observe a hotspot, we could change a key, of course, but we need to pay attention to a couple of things when considering that change, mainly the number of nodes, the size of the rows, the write volume, and the scaling we're looking at. Another thing we could consider is salting. Salting is an approach that lets us avoid hotspots; it's essentially a way we spread out a time-series schema. There's a good document on that; I think it's under the Bigtable schema design guidance for time series. But again, that's not a major focus for this course. With that said, let's move on to the test tips.
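To tie the row-key advice together, here is a minimal sketch of writing a time-series point to Bigtable with a salted row key, so sequential timestamps don't all land in one key range. The instance, table, column family, and number of salt buckets are hypothetical.

```python
# Minimal sketch: salted row key for a time-series write in Bigtable.
# Instance, table, column family, and the number of salt buckets are hypothetical.
import time
import zlib
from google.cloud import bigtable

NUM_SALT_BUCKETS = 8  # spreads sequential writes across 8 key prefixes

client = bigtable.Client(project="example-project")
table = client.instance("example-instance").table("metrics")

def write_metric(device_id: str, value: float) -> None:
    ts = int(time.time())
    # Deterministic salt so reads for a device know which bucket to scan.
    salt = zlib.crc32(device_id.encode()) % NUM_SALT_BUCKETS
    # Key layout: <salt>#<device>#<timestamp>; keep rows small per best practice.
    row_key = f"{salt}#{device_id}#{ts}".encode()
    row = table.direct_row(row_key)
    row.set_cell("stats", b"cpu", str(value).encode())
    row.commit()

write_metric("device-42", 0.73)
```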

14. Defining a key structure Test Tips

The main test tips: be aware that hashing the key to spread the load will certainly provide some performance benefits. Another thing to be aware of is what a UUID is and why it's important; it is a 128-bit number.

Cloud Spanner, for example, is strongly typed; understand why that's important. Also, the Bigtable Key Visualizer has a heat map, and this can be used to identify hotspots. Let's move on.

15. Session management

Session management. Let's talk about session management in regards to this section of the exam objectives. The main focus is going to be on two areas: Cloud Identity-Aware Proxy (IAP) and Cloud Spanner. Let's start with Cloud Identity-Aware Proxy. The first thing to be aware of when it comes to Cloud IAP sessions is that they're tied to the underlying Google login session. With the IAP flow, which is down at the bottom here, we have to establish a session, and once that's started, the user receives what's called a session cookie, and that session cookie is signed by the Google account service.

Now, with Cloud IAP there are a lot of little details, but the main thing to realize for this exam is that it uses a cookie and that there's a one-hour expiration time, and we want to know what the process would be if the cookie expired. One of the things that caught me by surprise, and again, sometimes you may do a lot of work in this area and sometimes you may not, was understanding how IAP handles AJAX versus non-AJAX requests when the session state changes. With that said, the situations that would require the user to sign back in are as follows: the user signed out of the account, the account was suspended, or the account requires a password reset. So, for example, if the user signs out of the account, there's a state change, and because there's a state change, Cloud Identity-Aware Proxy is going to invalidate that specific session. Therefore, we have to re-initialize the session, and the workflow for that is what's shown back here: essentially get another cookie, and once the cookie is signed, it is validated again.

Now, before the exam, you're going to want to go to this link here and understand how sessions are actually handled: how Cloud IAP handles requests, how an expired session is handled, and how AJAX applications and requests are handled. The difference, again, is whether the request gets redirected to the Google sign-in flow or gets a 401 response code. We're not done yet with Cloud IAP; there are a few more things to know. Another thing to memorize is how a 401 response is handled. If the response comes back as an invalid 401 error, then basically that JWT token is expired; it's an expired-session response.

Therefore, we need to update our app code to handle the error. Once we do that, we have to provide what's called a refresh link, and then refresh the window or close the window and start over. Here's an example of the code; if you go back to this page, all the information on how to do that is there. The next area of focus is Cloud Spanner. From an objective point of view, we just want to be aware that Cloud Spanner is, of course, a proprietary distributed database solution in Google Cloud. The main thing to realize is that sessions are used to track user state; it is stateful. The other main focus is to go over to the Cloud Spanner link and review the session best practices. I didn't see anything on the exam about Cloud Spanner sessions, but I want to cover it just in case you run into it. With that said, let's move on to the test tip.
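Purely as an illustration of the pattern described above (detect the expired-session 401 and prompt the user to refresh the session rather than silently retrying), here is a minimal Python sketch using the requests library against a hypothetical IAP-protected endpoint. The URL and the refresh behavior shown are assumptions for the sketch, not the exact snippet from Google's documentation.

```python
# Minimal sketch of the "handle the 401, then offer a session refresh" pattern
# for an IAP-protected endpoint. The URL is hypothetical; in a real browser app
# the refresh link would reload the page so IAP can re-run its sign-in flow.
import requests

IAP_PROTECTED_URL = "https://app.example.com/api/orders"  # hypothetical endpoint

def fetch_orders(session: requests.Session):
    resp = session.get(IAP_PROTECTED_URL)
    if resp.status_code == 401:
        # IAP signals an expired session; don't retry blindly.
        # Surface a refresh action so the user can re-establish the session.
        print("Session expired - please refresh / sign in again.")
        return None
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    fetch_orders(requests.Session())
```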
