Road To AZ-204 - Developing Solutions That Use Cosmos DB Storage

Intro

 
This article's intention is to explain the main skills measured in this sub-topic of the AZ-204 certification. Cosmos DB is the main component covered here, with its fundamentals explained alongside a practical example.
 
This certification is very extensive, and this article approaches only the main topics, so make sure you know those components in depth before taking the exam. Another great tip is doing exam simulators before the official exam in order to validate your knowledge.
 

What is the Certification AZ-204 - Developing Solutions for Microsoft Azure?

 
The AZ-204 - Developing Solutions for Microsoft Azure certification measures your skills in designing, building, testing, and maintaining applications and/or services in the Microsoft Azure cloud environment. It covers, among others, the following components,
  • Azure Virtual Machines;
  • Docker;
  • Azure Containers;
  • Azure App Service Web Apps;
  • Azure Functions;
  • Cosmos DB;
  • Azure Storage;
  • Azure AD;
  • Azure Key Vault;
  • Azure Managed Identities;
  • Azure Redis Cache;
  • Azure Logic App;
  • Azure Event Grid;
  • Azure Event Hub;
  • Azure Notification Hub;
  • Azure Service Bus;
  • Azure Queue Storage.
 
Target Audience
 
Any IT professional willing to improve their knowledge of Microsoft Azure is encouraged to take this certification; it is a great way to measure your skills with trending technologies. However, some groups of professionals are more likely to take maximum advantage of it,
  • Azure Developers, with at least 1 year of experience with Microsoft Azure;
  • Experienced Software Developers, looking for an Architect position in a hybrid environment;
  • Software Developers, working to move applications to the cloud environment.
Skills Measured
 
As of this writing, the skills measured in the exam are split as follows,
  • Develop Azure compute solutions (25-30%)
    • Implement Azure functions
  • Develop for Azure storage (10-15%)
    • Develop solutions that use Cosmos DB storage
    • Develop solutions that use blob storage
  • Implement Azure security (15-20%)
    • Implement user authentication and authorization
    • Implement secure cloud solutions
  • Monitor, troubleshoot, and optimize Azure solutions (10-15%)
    • Integrate caching and content delivery within solutions
    • Instrument solutions to support monitoring and logging
  • Connect to and consume Azure services and third-party services (25-30%)
    • Develop an App Service Logic App
    • Implement API Management
    • Develop event-based solutions
    • Develop message-based solutions
Updated skills can be found on the AZ-204 Official Measured Skills website.
 

Benefits of Getting Certified

 
The main benefit here is having a worldwide-recognized certification that proves your knowledge of this topic. Among the intrinsic and extrinsic benefits, we have,
  • Higher growth potential, as certifications are a big plus;
  • Discounts and deals on Microsoft products and from partners, like PluralSight and UpWork;
  • MCP newsletters, with trending technologies;
  • Higher exposure on LinkedIn, as recruiters usually search for specific certifications;
  • Higher salary, as you become more valuable to your company;
  • The unique happiness of getting an approved result, knowing that all your efforts were worth it.
 

Main skills Measured by this Topic

 
What is Cosmos DB?
 
Cosmos DB is a NoSQL database with incredibly fast response times and strong support for scalability. Azure Cosmos DB offers Cosmos DB as a fully managed service, so you do not need to worry about administration: Azure handles management, updates, and patching automatically.
 
Azure Cosmos DB also offers serverless, cost-effective capacity management and automatic scalability options, and its main benefits are as follows,
  • Integrates with many Azure services, like Azure Functions, Azure Kubernetes Service, and Azure App Service;
  • Integrates with many database APIs, like the native Core SQL, MongoDB, Cassandra, and Gremlin;
  • Integrates with many development SDKs, like .NET, Java, Python, and Node.js;
  • Schema-less service that automatically applies indexes to your data, resulting in fast queries;
  • Guaranteed uptime SLA of 99.999% availability;
  • Automatic data replication among Azure regions;
  • Data protected with encryption at rest and role-based access control;
  • Fully managed database, with updates, maintenance, and patches applied automatically;
  • Autoscale provided to handle workloads of different sizes;
 
Cosmos DB APIs
 
Azure Cosmos DB is very flexible, being offered through different types of APIs in order to support a wider range of applications. All those APIs are supported thanks to its multi-model approach, which can deliver data as documents, key-value pairs, wide columns, or graph data.
 
It is strongly recommended to use the Core SQL API for new projects, whereas for existing databases it is recommended to use the matching database-specific API. Those APIs are as follows,
  • Core SQL API, the default API for Azure Cosmos DB, enables querying your data with a language very close to SQL;
  • MongoDB API, used to communicate with MongoDB databases, storing data as documents;
  • Cassandra API, used to communicate with Cassandra using the Cassandra Query Language, storing data as a partitioned row store;
  • Azure Table API, used to communicate with Azure Table Storage, allowing indexes on the partition and row keys. To query data you can use OData, LINQ in code, and the REST APIs for GET operations;
  • Gremlin API, used to provide a graph-based data view, queried with a graph traversal language.
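As an illustration of how close the Core SQL API query language is to SQL, a parameterized query can be issued through the .NET SDK. This is only a sketch: it assumes a `Container` object named `customContainer` and the `Person` class from the practical sample later in this article.

```csharp
// Sketch: querying Person documents with the Core SQL API (SDK v3).
QueryDefinition query = new QueryDefinition(
        "SELECT * FROM c WHERE c.Name = @name")
    .WithParameter("@name", "Thiago");

using (FeedIterator<Person> iterator = customContainer.GetItemQueryIterator<Person>(query))
{
    while (iterator.HasMoreResults)
    {
        // Results come back in pages; each page reports its RU charge.
        FeedResponse<Person> page = await iterator.ReadNextAsync();
        foreach (Person person in page)
        {
            Console.WriteLine($"{person.Name} {person.LastName}");
        }
    }
}
```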
Partitioning Schemas in Cosmos DB
 
Azure Cosmos DB indexes data and distributes it among partitions; to achieve better performance, items are grouped by their partition keys. To better understand partitioning schemas in Azure Cosmos DB, some basic concepts have to be explained, as follows,
  • Partition Keys are the keys used to group items into partitions;
  • Logical Partitions consist of a set of items sharing the same partition key value;
  • Physical Partitions consist of a set of logical partitions. They may hold one or more logical partitions and are managed by Azure Cosmos DB;
  • Replica Sets: each physical partition is materialized as a replica set, a self-managed and dynamically load-balanced group of replicas spread across multiple fault domains.
The concepts explained above can be better visualized in the image below,
 
 

Consistency levels in Cosmos DB

 
Azure Cosmos DB offers five consistency levels, letting you balance data consistency, availability, and query performance according to your needs. Those consistency levels are as follows,
 
Strong Consistency
  • Read operations are guaranteed to return the most recent committed data;
  • Read operations cost as much as Bounded Staleness and more than the Session and Eventual levels;
  • Written data is only available to be read after it has been replicated to a majority of its replicas;
Bounded Staleness
  • Read operations may lag behind write operations, bounded by a time interval or a number of versions;
  • Read operations cost as much as Strong Consistency and more than the Session and Eventual levels;
  • Stronger consistency than the Session, Consistent Prefix, or Eventual levels;
  • Recommended for globally distributed applications requiring high availability and low latency;
Session
  • Read operations are guaranteed to see data written within the same session;
  • Consistency is scoped to a user session, while other sessions may read stale data that has just been written by another session;
  • Default consistency level for newly created databases;
  • Read operation costs are lower than Bounded Staleness and Strong Consistency but higher than Eventual Consistency;
Consistent Prefix
  • Read operations never see out-of-order writes, but they are not guaranteed to return the most recent data;
  • Stale reads happen when one replica has changed the data state but that change has not been replicated yet;
  • Stronger consistency than Eventual Consistency but weaker than the other levels.
Eventual
  • Read operations do not guarantee any consistency level;
  • Weakest consistency level;
  • Lowest latency and best performance among the consistency levels;
  • Read operations cost less than in any other consistency level;
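The consistency level can also be requested per client with the .NET SDK. A minimal sketch, assuming the `endpointUri` and `primaryKey` variables from the practical sample later in this article (note that a client may only relax, never strengthen, the account's default level):

```csharp
// Sketch: creating a client that requests Session consistency.
CosmosClient client = new CosmosClient(
    endpointUri,
    primaryKey,
    new CosmosClientOptions
    {
        // Must be equal to or weaker than the account's default consistency.
        ConsistencyLevel = ConsistencyLevel.Session
    });
```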

Cosmos DB Containers

 
Azure Cosmos containers are useful for Azure Cosmos DB scalability, both storage scalability and throughput scalability. Azure Cosmos containers are also great when you need different sets of configurations across your data, because each container can be configured individually.
 
Azure Cosmos Container has some container-specific properties, and those properties, which can be system-generated or user-configurable, vary according to the used API.
 
The property list ranges from unique identifiers for containers to configurations of the purging policies. You may find the entire property list here.
 
When creating a container, you may choose between two throughput modes,
  1. Dedicated mode, where the provisioned throughput configured on the container is exclusive to that container and is backed by SLAs;
  2. Shared mode, where the provisioned throughput configured on the database is shared among all containers using shared mode.
Cosmos DB containers are available, at the present date, for all Cosmos DB APIs except the Gremlin API and Table API.
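The two throughput modes can be sketched with the .NET SDK as follows. This is illustrative only: the container and database names are made up, and `client` and `sampleDatabase` are assumed to be the objects created in the practical sample later in this article.

```csharp
// Dedicated mode: throughput (in RU/s) provisioned on the container itself.
await sampleDatabase.CreateContainerIfNotExistsAsync(
    new ContainerProperties("DedicatedContainer", "/Name"),
    throughput: 400);

// Shared mode: throughput provisioned at the database level and shared
// by every container created in it without its own throughput.
Database sharedDatabase = await client.CreateDatabaseIfNotExistsAsync(
    "SharedThroughputDatabase", throughput: 400);
await sharedDatabase.CreateContainerIfNotExistsAsync(
    new ContainerProperties("SharedContainer", "/Name"));
```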
 

Scaling Cosmos DB

 
Azure Cosmos DB offers manual and automatic scaling, without any interruption to your services or impact on the Azure Cosmos DB SLA.
 
With automatic scaling, Azure Cosmos DB automatically adjusts your throughput capacity, up or down, according to its usage, without requiring any custom logic or code.
 
You only need to configure your maximum throughput capacity, and Azure will adjust your Azure Cosmos DB throughput between 10% and 100% of that maximum.
 
With manual scaling, you permanently change your throughput capacity.
 
Keep in mind that it is vital to choose your partition keys wisely before scaling your Azure Cosmos DB. Otherwise, your requests will not be balanced and you will experience a hot partition, which raises costs and reduces performance.
 
There are some important settings to configure when setting up autoscale, as follows,
  • Time to Live, defines the TTL for your container. It is off by default, but you can turn it on with the TTL being item-specific or applied to all items in the container;
  • Geospatial Configuration, used to query items based on location;
    • Geography, represents data in a round-earth coordinate system;
    • Geometry, represents data in a flat coordinate system.
  • Partition Key, the partition key used to scale your partition;
  • Indexing Policy, sets how the container applies indexes to its items. You may include or exclude properties, set the consistency mode, apply indexes automatically, etc.
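Autoscale and TTL can both be set when creating a container with the .NET SDK. A sketch under assumptions: the container name is illustrative, and `sampleDatabase` is the `Database` object from the practical sample later in this article.

```csharp
// Sketch: container with autoscale throughput and a default time-to-live.
ContainerProperties properties = new ContainerProperties("AutoscaleContainer", "/Name")
{
    // TTL in seconds for every item; -1 would turn TTL on with no default expiry.
    DefaultTimeToLive = 3600
};

// Autoscale between 10% (400 RU/s) and 100% (4000 RU/s) of the configured maximum.
ThroughputProperties throughput = ThroughputProperties.CreateAutoscaleThroughput(4000);

await sampleDatabase.CreateContainerIfNotExistsAsync(properties, throughput);
```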

Triggers, Stored Procedures, and user-defined functions with Cosmos DB

 
Azure Cosmos DB provides a transactional way to execute code, allowing you to define triggers, stored procedures, and user-defined functions. You can define them through the Azure Portal, the JavaScript Query API for Cosmos DB, or the Cosmos DB SQL API client SDKs.
 
Azure Cosmos DB has two types of triggers,
  • Pre-trigger which is executed before the data has changed;
  • Post-trigger which is executed after the data has changed.

Change Feed Notifications with Cosmos DB

 
The Azure Cosmos DB change feed is a service that monitors a container for changes and distributes the events triggered by those changes across multiple consumers.
 
The change feed can also be scaled up or down alongside the Cosmos DB containers, and its main components are as follows,
  • The monitored container, whose inserts and updates are reflected in the change feed;
  • The lease container, which stores state and coordinates the change feed processor;
  • The host, which hosts the change feed processor;
  • The delegate, which is the code executed when any event in the change feed is triggered.
The change feed processor may be hosted on Azure services that support long-running tasks, like Azure WebJobs, Azure Virtual Machines, Azure Kubernetes Service, and .NET hosted services.
 
Practical Sample
 
 
Case study: we are going to create a database schema to represent the classes below. Do not forget that your Id field must be specified and must be a unique string,
    public class Person
    {
        [JsonProperty(PropertyName = "id")]
        public string Id { get; set; }
        public DateTime BirthDate { get; set; }
        public string Name { get; set; }
        public string LastName { get; set; }
        public Address Address { get; set; }
        public Vehicle Vehicle { get; set; }
    }

    public class Address
    {
        public int Id { get; set; }
        public string City { get; set; }
        public string StreetAndNumber { get; set; }
    }

    public class Vehicle
    {
        public int Id { get; set; }
        public int Year { get; set; }
        public string Model { get; set; }
        public string Make { get; set; }
    }

Creating Azure Cosmos DB using Azure Portal

 
In your Azure Portal, search for the Azure Cosmos DB product and then click on Add. Fill in the Basics, Networking, Backup policy, Encryption, and Tags forms, then create it.
Here we will be using the Core SQL API and naming the Cosmos DB as sampleazurecosmosdb.
 
 
After successful deployment, access your Cosmos DB resource in order to get your endpoint URI and primary key for further usage.
 
 
Validate your empty Data Explorer
 
 

Creating an Azure Cosmos DB Database using C#

 
Requirements
  1. Create a new Console Application
  2. Install the NuGet package Microsoft.Azure.Cosmos
Call the CreateDatabaseIfNotExistsAsync method from your Cosmos Client to create your database,
    class Program
    {
        private static readonly string endpointUri = "https://sampleazurecosmosdb.documents.azure.com:443/";
        private static readonly string primaryKey = "BD43cPOWtjdSsSeBTpy2rbJLIW4lMzhGoNkiVKX6y32cTQ2E2f139J0r8xxS3YR8Sy1bQywls9ByISabRjuaUQ==";

        public static async Task Main(string[] args)
        {
            using (CosmosClient client = new CosmosClient(endpointUri, primaryKey))
            {
                DatabaseResponse databaseResponse = await client.CreateDatabaseIfNotExistsAsync("SampleCosmosDB");
                Database sampleDatabase = databaseResponse.Database;

                await Console.Out.WriteLineAsync($"Database Id:\t{sampleDatabase.Id}");
            }
        }
    }
 
Validate your Data Explorer through Azure Portal
 

Creating an Azure Cosmos DB Partitioned Container using C#

 
Set your indexing policy in your container properties and call the CreateContainerIfNotExistsAsync method on your database object; here we also pass the desired throughput alongside the container properties.
    IndexingPolicy indexingPolicy = new IndexingPolicy
    {
        IndexingMode = IndexingMode.Consistent,
        Automatic = true,
        IncludedPaths =
        {
            new IncludedPath
            {
                Path = "/*"
            }
        }
    };
    var containerProperties = new ContainerProperties("Person", "/Name")
    {
        IndexingPolicy = indexingPolicy
    };
    var sampleResponse = await sampleDatabase.CreateContainerIfNotExistsAsync(containerProperties, 10000);
    var customContainer = sampleResponse.Container;
    await Console.Out.WriteLineAsync($"Sample Container Id:\t{customContainer.Id}");
 

Validate your Data Explorer through Azure Portal

 
 

Adding data to your Azure Cosmos DB container using C#

 
We will be adding a new Person as follows,
    private static Person GetPerson()
    {
        return new Person
        {
            BirthDate = DateTime.Now.AddYears(-30),
            Id = "10.Thiago",
            Name = "Thiago",
            LastName = "Araujo",
            Vehicle = new Vehicle
            {
                Id = 2,
                Make = "Audi",
                Model = "TT",
                Year = 2020
            },
            Address = new Address
            {
                Id = 12,
                City = "Lisbon",
                StreetAndNumber = "Rua 25 de Abril, 4"
            }
        };
    }
From your Container, call the CreateItemAsync method and pass the person object alongside its partition key.
    var createPersonResponse = await customContainer.CreateItemAsync<Person>(GetPerson(), new PartitionKey(GetPerson().Name));
    await Console.Out.WriteLineAsync($"Created person with Id:\t{createPersonResponse.Resource.Id}. Consuming total of \t{createPersonResponse.RequestCharge} RUs");
 
Validate your Data Explorer through Azure Portal
 

Creating an Azure Cosmos DB Database using Azure CLI

 
Creating an Azure Cosmos DB Partitioned Container using Azure CLI
Setting variables
    $resourceGroup = "your resource group"
    $cosmosDBAccount = "samplecosmosaccount"
    $databaseName = "sampleclidatabase"
    $containerName = "samplecontainername"
    $partitionKey = "/Name"
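Using the variables above, the account, database, and container can be created with the az CLI. This is a sketch of the commands behind the steps that follow, assuming the Core SQL API and the sample names defined above:

```shell
# Create the Cosmos DB account in the resource group.
az cosmosdb create --name $cosmosDBAccount --resource-group $resourceGroup

# Create the SQL (Core) API database inside the account.
az cosmosdb sql database create --account-name $cosmosDBAccount --resource-group $resourceGroup --name $databaseName

# Create the partitioned container inside the database.
az cosmosdb sql container create --account-name $cosmosDBAccount --resource-group $resourceGroup --database-name $databaseName --name $containerName --partition-key-path $partitionKey
```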
Creating Cosmos DB Account
 
 
Creating Cosmos DB Database
 
 
Creating Cosmos DB Container
 
 
Checking Azure Portal 
 
 
Scaling Containers 
 
Inside your Azure Cosmos resource, go to Containers and then Scale. Configure your settings and click on Save.
 
 

Creating a Change feed notification

 
Here I used the Cosmos DB Emulator to get the change feed notification working.
 
Create databases and containers
    CosmosClient cosmosClient = new CosmosClientBuilder(endpointUri, primaryKey).Build();

    Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync(databaseName);

    await database.CreateContainerIfNotExistsAsync(new ContainerProperties(sourceContainerName, "/id"));

    await database.CreateContainerIfNotExistsAsync(new ContainerProperties(leaseContainerName, "/id"));
Start Change Feed Processor
    Container leaseContainer = cosmosClient.GetContainer(databaseName, leaseContainerName);
    ChangeFeedProcessor changeFeedProcessor = cosmosClient.GetContainer(databaseName, sourceContainerName)
        .GetChangeFeedProcessorBuilder<Person>(processorName: "changeFeedSample", HandleChangesAsync)
            .WithInstanceName("consoleHost")
            .WithLeaseContainer(leaseContainer)
            .Build();

    await changeFeedProcessor.StartAsync();
Track Changes from Source Container
    static async Task HandleChangesAsync(IReadOnlyCollection<Person> changes, CancellationToken cancellationToken)
    {
        Console.WriteLine("Started handling changes...");
        foreach (Person item in changes)
        {
            Console.WriteLine($"Detected operation for person with id {item.Id}, created at {item.CreationDate}.");
            // Simulate some asynchronous operation
            await Task.Delay(10);
        }

        Console.WriteLine("Finished handling changes.");
    }
Create Items in the Source Container
    private static async Task GenerateItemsAsync(CosmosClient cosmosClient)
    {
        Container sourceContainer = cosmosClient.GetContainer(databaseName, sourceContainerName);
        while (true)
        {
            Console.WriteLine("Enter a number of people to insert in the container or 'exit' to stop:");
            string command = Console.ReadLine();
            if ("exit".Equals(command, StringComparison.InvariantCultureIgnoreCase))
            {
                Console.WriteLine();
                break;
            }

            if (int.TryParse(command, out int itemsToInsert))
            {
                Console.WriteLine($"Generating {itemsToInsert} people...");
                for (int i = 0; i < itemsToInsert; i++)
                {
                    var person = GetPerson();
                    await sourceContainer.CreateItemAsync<Person>(person,
                        new PartitionKey(person.Id));
                }
            }
        }
    }
Complete Code
    using System;
    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos;
    using Microsoft.Azure.Cosmos.Fluent;
    using Newtonsoft.Json;

    class Program
    {
        private static readonly string endpointUri = "https://localhost:8081/";
        private static readonly string primaryKey = "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==";
        private static readonly string databaseName = "sampleDatabase";
        private static readonly string sourceContainerName = "sampleSourceContainer";
        private static readonly string leaseContainerName = "sampleLeaseContainer";

        static async Task Main(string[] args)
        {
            CosmosClient cosmosClient = new CosmosClientBuilder(endpointUri, primaryKey).Build();

            Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync(databaseName);

            await database.CreateContainerIfNotExistsAsync(new ContainerProperties(sourceContainerName, "/id"));

            await database.CreateContainerIfNotExistsAsync(new ContainerProperties(leaseContainerName, "/id"));

            ChangeFeedProcessor processor = await StartChangeFeedProcessorAsync(cosmosClient);

            await GenerateItemsAsync(cosmosClient);
        }

        private static async Task<ChangeFeedProcessor> StartChangeFeedProcessorAsync(CosmosClient cosmosClient)
        {
            Container leaseContainer = cosmosClient.GetContainer(databaseName, leaseContainerName);
            ChangeFeedProcessor changeFeedProcessor = cosmosClient.GetContainer(databaseName, sourceContainerName)
                .GetChangeFeedProcessorBuilder<Person>(processorName: "changeFeedSample", HandleChangesAsync)
                    .WithInstanceName("consoleHost")
                    .WithLeaseContainer(leaseContainer)
                    .Build();

            Console.WriteLine("Starting Change Feed Processor...");
            await changeFeedProcessor.StartAsync();
            Console.WriteLine("Change Feed Processor started.");
            return changeFeedProcessor;
        }

        static async Task HandleChangesAsync(IReadOnlyCollection<Person> changes, CancellationToken cancellationToken)
        {
            Console.WriteLine("Started handling changes...");
            foreach (Person item in changes)
            {
                Console.WriteLine($"Detected operation for person with id {item.Id}, created at {item.CreationDate}.");
                // Simulate some asynchronous operation
                await Task.Delay(10);
            }

            Console.WriteLine("Finished handling changes.");
        }

        private static async Task GenerateItemsAsync(CosmosClient cosmosClient)
        {
            Container sourceContainer = cosmosClient.GetContainer(databaseName, sourceContainerName);
            while (true)
            {
                Console.WriteLine("Enter a number of people to insert in the container or 'exit' to stop:");
                string command = Console.ReadLine();
                if ("exit".Equals(command, StringComparison.InvariantCultureIgnoreCase))
                {
                    Console.WriteLine();
                    break;
                }

                if (int.TryParse(command, out int itemsToInsert))
                {
                    Console.WriteLine($"Generating {itemsToInsert} people...");
                    for (int i = 0; i < itemsToInsert; i++)
                    {
                        var person = GetPerson();
                        await sourceContainer.CreateItemAsync<Person>(person,
                            new PartitionKey(person.Id));
                    }
                }
            }
        }

        private static Person GetPerson()
        {
            Random random = new Random();
            return new Person
            {
                BirthDate = DateTime.Now.AddYears(-30),
                Id = random.Next() + "Thiago",
                Name = "Thiago",
                LastName = "Araujo",
                CreationDate = DateTime.Now,
                Vehicle = new Vehicle
                {
                    Id = random.Next(),
                    Make = "Audi",
                    Model = "TT",
                    Year = random.Next()
                },
                Address = new Address
                {
                    Id = random.Next(),
                    City = "Lisbon",
                    StreetAndNumber = "Rua 25 de Abril, 4"
                }
            };
        }
    }

    public class Person
    {
        [JsonProperty(PropertyName = "id")]
        public string Id { get; set; }
        public DateTime BirthDate { get; set; }
        public string Name { get; set; }
        public string LastName { get; set; }
        public Address Address { get; set; }
        public Vehicle Vehicle { get; set; }
        public DateTime CreationDate { get; set; }
    }

    public class Address
    {
        public int Id { get; set; }
        public string City { get; set; }
        public string StreetAndNumber { get; set; }
    }

    public class Vehicle
    {
        public int Id { get; set; }
        public int Year { get; set; }
        public string Model { get; set; }
        public string Make { get; set; }
    }
Result
 
 
Creating Stored Procedures
    // SAMPLE STORED PROCEDURE
    function sample(prefix) {
        var collection = getContext().getCollection();

        // Query documents and take 1st item.
        var isAccepted = collection.queryDocuments(
            collection.getSelfLink(),
            'SELECT * FROM root r',
            function (err, feed, options) {
                if (err) throw err;

                // Check the feed and if empty, set the body to 'no docs found',
                // else take 1st element from feed
                if (!feed || !feed.length) {
                    var response = getContext().getResponse();
                    response.setBody('no docs found');
                }
                else {
                    var response = getContext().getResponse();
                    var body = { prefix: prefix, feed: feed[0] };
                    response.setBody(JSON.stringify(body));
                }
            });

        if (!isAccepted) throw new Error('The query was not accepted by the server.');
    }
Executing
 
 

Creating Triggers

 
Pre-Trigger
    function validateItemTimestamp() {
        var context = getContext();
        var request = context.getRequest();

        // item to be created in the current operation
        var itemToCreate = request.getBody();

        // validate properties
        if (!("triggerTime" in itemToCreate)) {
            var ts = new Date();
            itemToCreate["triggerTime"] = ts.getTime();
        }

        // update the item that will be created
        request.setBody(itemToCreate);
    }
Post-Trigger 
    function updateMetadata() {
        var context = getContext();
        var container = context.getCollection();
        var response = context.getResponse();

        // item that was created
        var createdItem = response.getBody();

        // query for metadata document
        var filterQuery = 'SELECT * FROM root r WHERE r.id = "_metadata"';
        var accept = container.queryDocuments(container.getSelfLink(), filterQuery,
            updateMetadataCallback);
        if (!accept) throw "Unable to update metadata, abort";

        // the callback is nested so it can reference the container above
        function updateMetadataCallback(err, items, responseOptions) {
            if (err) throw new Error("Error" + err.message);
            if (items.length != 1) throw 'Unable to find metadata document';

            var metadataItem = items[0];

            // update metadata
            metadataItem.createdItems += 1;
            metadataItem.createdNames += " Post trigger";
            var accepted = container.replaceDocument(metadataItem._self,
                metadataItem, function (err, itemReplaced) {
                    if (err) throw "Unable to update metadata, abort";
                });
            if (!accepted) throw "Unable to update metadata, abort";
        }
    }
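Triggers are not fired automatically: they must first be registered on the container and then explicitly requested per operation. A sketch with the .NET SDK, assuming the `customContainer` object from the samples above and that the pre-trigger source is saved in a local file named validateItemTimestamp.js:

```csharp
// Register the pre-trigger on the container (Microsoft.Azure.Cosmos.Scripts).
await customContainer.Scripts.CreateTriggerAsync(new TriggerProperties
{
    Id = "validateItemTimestamp",
    Body = File.ReadAllText("validateItemTimestamp.js"),
    TriggerOperation = TriggerOperation.Create,
    TriggerType = TriggerType.Pre
});

// Request the pre-trigger explicitly when creating an item.
var person = GetPerson();
await customContainer.CreateItemAsync(
    person,
    new PartitionKey(person.Name),
    new ItemRequestOptions
    {
        PreTriggerInclude = new List<string> { "validateItemTimestamp" }
    });
```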