Making A Simple Recommender System - Azure Cosmos DB And Apache Tinkerpop

Graph systems are among the most ubiquitous models found in almost any natural or man-made structure we see every day. Since computer science evolved around what we observe, devise, and deduce, graphs play a huge role in day-to-day computation and programming techniques. Their use dates back to even before graph theory was widely adopted in mathematics, since mechanical computing was implicitly graph driven. If only we could get Alan Turing to weigh in on this.

Before I start, a big part of the inspiration behind this write-up goes to the work of Marko A. Rodriguez. I was lucky to stumble upon his work while frolicking around the web, and his writing on graph computing systems and TinkerPop/Gremlin is seriously inspiring.
 
What we are going to do today

Let's go right to what we want to do today. Our components of interest are the Graph API of Azure Cosmos DB along with Apache TinkerPop. Azure Cosmos DB has a Graph API that allows us to store data as a graph, or a network. In simple words, instead of storing data in a tabular format of rows and columns, you store data items together with the relations that describe how they connect to each other. If the text here is not really doing it for you, let's jump into some examples.

Let's have a look at this sample graph I made from HIMYM (How I Met Your Mother). The big ellipses and boxes here are called vertices and the lines that connect them are called edges. Since we have little arrowheads telling us their directions, this is actually a directed graph. If we look at the data behind the graph, you can see we can represent it in two ways. In tables, you can list the vertices in one table and the edges in another. The edges table might look like this:

From      To          Type
Ted       Barney      friend
Ted       Umbrella    found
Barney    Robin       wife
Robin     Barney      husband
Tracy     Umbrella    lost
Ted       Tracy       lost
...       ...         ...

Of course, I didn't add all the rows, and if you look at the design you can see it is not properly normalized. In plain terms, that means we would need a separate Type table holding all the edge types so we don't write a type twice. But that is not in this article's scope, so let's ignore it.

The thing to see here is that graph nodes/vertices can be of different types and the edges can mean different relationships. There are vertices that don't represent a character from HIMYM, like the Umbrella and MacLaren's Pub. Storing this data in a regular relational database is possible and has been done many times; this is called the 'implicit' way to store graph-like data. Now, one might think that by this rule all data in the world can be represented by a graph, which indeed is true. But we use a graph database only when the data itself focuses more on relations than on content, such as the friend graph in a social networking site or a geographical model of all the offices of a big corporation. The vertices do carry data, but the relations are what really matter here. In these cases, a graph database comes in very handy.
 
Azure Cosmos DB recently came out with a Graph API, and it can store graph data natively, where the cost of traversing from a vertex to its neighbours is constant. In a regular relational database, we have to join multiple tables to formulate these relations, and in a lot of cases those computational costs are not constant.
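For instance, finding every movie a given user has rated in a relational store means joining a users table, a ratings table, and a movies table. In Gremlin (more on it in a moment), the same question is a single hop out of the user vertex. This is just an illustrative sketch using the vertex labels and properties we will define later in this article:

g.V().hasLabel('user').has('id', 'user7')   // start at one user vertex
    .out('rates')                           // hop over its outgoing 'rates' edges to the movies
    .values('title')                        // read the titles of those movie vertices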
 
Apache Tinkerpop joins the party

I hope by now we understand why we need a graph database, so now is the right time to talk about Apache TinkerPop. It is essentially a fantastic graph computing framework that lets you talk to multiple graph databases through a single domain-specific language, Gremlin. It has bindings for most of the popular languages and it is pretty native to Groovy and Java. Fret not, we have a way to use it over here with C# too.

Before we start, we need to create a simple graph database on Azure. The quickstart is here. You can also opt for Java or Node.js. I personally used the Azure Cosmos DB Emulator; the quickstart to install that instead of an actual Azure Cosmos DB is here. You can opt for either of these, but I have to remind you that, at the time this article is being written, the Azure Cosmos DB Emulator does not support creating graphs or browsing graph data through its local web portal. You have to use the Gremlin Console to connect and talk to the local emulator.
 
The Gremlin Console is a command-line REPL that you can use to traverse a local Gremlin/TinkerPop graph or a remote one. It can connect to any Gremlin server anywhere as long as you have the credentials. You can also use the Gremlin Console to talk to an actual Azure Cosmos DB graph database. I suggest using the Windows Subsystem for Linux (WSL), better known as "Bash on Ubuntu on Windows", for running the Gremlin Console. The Gremlin Console quickstart is here.
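As a rough sketch of what a session looks like (the actual host, port, username, and key for your Cosmos DB account or emulator go into the remote configuration YAML as described in the quickstart; the file name below is just the stock one that ships with the console):

bin/gremlin.sh
gremlin> :remote connect tinkerpop.server conf/remote-secure.yaml
gremlin> :remote console
gremlin> g.V().count()

The last line is just a smoke test; if the connection works, it returns the number of vertices in the remote graph.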
 
Our sample data set today

We are going to use the movie review dataset from GroupLens, available here. We are using the small dataset of about 100,000 ratings applied to 9,000 movies by 700 users. If you download and unzip the data you will see multiple CSV files, of which I only used the movies.csv and ratings.csv files. I made a separate users.csv file for the list of users. Don't worry, all of these are attached with the sample code. The movies CSV comes with movie id, movie name, and genres columns. The ratings.csv comes with user id, movie id, and a rating column where the user has rated the movie from 0 to 5. The sample code includes a simple command-line uploader tool that will let you upload the data to your desired Gremlin server and the graph database connected to it. So, to understand how we talk to Gremlin, let's have a peek at the code, shall we?
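To make the shape of the input concrete, the relevant columns look roughly like this (illustrative rows only, not literal lines from the files attached to the sample code):

# movies.csv  -> movieId,title,genres
1,Toy Story (1995),Adventure|Animation|Children|Comedy|Fantasy

# ratings.csv -> userId,movieId,rating
7,1,4.5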

Let's talk code

I'm not going to walk through the full source all at once. Let's have a look at how we can connect to an existing Azure Cosmos DB graph database. First, we create a DocumentClient.

DocumentClient client = new DocumentClient(
    new Uri(endpoint),
    authKey,
    new ConnectionPolicy { ConnectionMode = ConnectionMode.Direct, ConnectionProtocol = Protocol.Tcp });

The thing you will definitely notice missing is the authKey variable. That is actually the primary key of your graph database, which you will find in your Azure portal. If you use the emulator, it uses a fixed key, which you will find in the quickstart link I shared above.
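For context, here is a minimal sketch of how those two values could be declared. The literals below are placeholders/assumptions; substitute your own account's URI and primary key from the Azure portal, or the emulator's default endpoint and documented fixed key from the quickstart.

// Placeholder values - replace with your own endpoint and key.
string endpoint = "https://localhost:8081/";   // local emulator default; an Azure account looks like https://<account>.documents.azure.com:443/
string authKey = "<primary-key-from-portal-or-emulator-quickstart>";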

Now that we have a DocumentClient, let's upload the set of movies that we will use for reviews. I'm using only the movie name and the movie id for the sake of simplicity here. In our graph, the movie vertices/nodes will have the label 'movie'. The label sets the type of the vertex, and we will set the name and id as properties of the vertex. The id uniquely distinguishes any vertex. So remember, no matter what the label of the vertex is, the id has to be unique, or Azure Cosmos DB will let you know that it cannot add the vertex because a vertex with the same key already exists. If you don't provide an id property, Azure Cosmos DB will generate one and put it on the vertex.

We make sure at the beginning of the upload to drop the existing collection if there is one.

private async Task NukeCollection(DocumentClient client)
{
    try
    {
        Console.WriteLine("Nuking...");
        var response = await client.DeleteDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("graphdb", "Movies"));
        Console.WriteLine(response.StatusCode);
    }
    catch (DocumentClientException ex)
    {
        Console.WriteLine(ex.Message);
    }
}

The database name I used for this sample is graphdb and the collection name is Movies. Pardon my indecency in hard-coding these, but it's sample code, so I put effectiveness ahead of decency.
 
Adding vertices to our 'Movies' graph

Now we upload the movies. Of course, we make sure to create the database first if it's not already there.

private async Task UploadMovies(DocumentClient client)
{
    try
    {
        Console.WriteLine("Uploading movies");
        Database database = await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "graphdb" });

        DocumentCollection graph = await client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri("graphdb"),
            new DocumentCollection { Id = "Movies" },
            new RequestOptions { OfferThroughput = 1000 });

        Console.WriteLine("Connected to graph Movies collection");

        Console.WriteLine("Reading movie list");
        using (TextReader reader = new StreamReader("movies2.csv"))
        using (CsvReader csv = new CsvReader(reader))
        {
            while (csv.Read())
            {
                string idField = csv.GetField<string>(0);
                string titleField = csv.GetField<string>(1);
                titleField = JsonConvert.ToString(titleField, '\"', StringEscapeHandling.EscapeHtml);

                Console.WriteLine("Uploading " + titleField);

                IDocumentQuery<dynamic> query = client.CreateGremlinQuery<dynamic>(graph, $"g.addV('movie').property('id', '{idField}').property('title', {titleField})");
                while (query.HasMoreResults)
                {
                    await query.ExecuteNextAsync();
                }
            }
        }
    }
    catch (DocumentClientException ex)
    {
        Console.WriteLine(ex.Message);
    }
}

For the collection, the same strategy is followed. We get a DocumentCollection instance by trying to create the Movies collection if it doesn't exist, or just fetching the existing one if it does. Then we start reading the CSV file. We use the same DocumentClient instance to create a Gremlin query that adds a vertex/node to the collection for each of the movies. We add the 'movie' label along with the title and id properties as promised. I want to focus a little on the Gremlin/TinkerPop construct we used here.

The full construct in a more readable format is:

g.addV('movie')
    .property('id', '{idField}')
    .property('title', {titleField})

Let's ignore idField and titleField since we already know what goes there. The first construct, g, stands for the graph in the collection. addV('label') is the method construct that adds a vertex. You can see the whole construct is fluent by design. The next two property(propertyName, propertyValue) constructs add two properties to the newly added movie vertex. Nice builder interface, isn't it? Pretty expressive. We are using the standard Groovy constructs for Gremlin; there are constructs for other languages too, including JavaScript. Following the same approach, I uploaded the users in the sample code too.
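If you want to sanity-check the upload, a quick count query through the same client does it. This is a minimal sketch reusing the CreateGremlinQuery pattern from above; it assumes you are inside an async method and that client and graph are the same instances used in UploadMovies.

// Count the movie vertices that were just uploaded (assumed to run after UploadMovies).
IDocumentQuery<dynamic> countQuery = client.CreateGremlinQuery<dynamic>(graph, "g.V().hasLabel('movie').count()");
while (countQuery.HasMoreResults)
{
    foreach (var count in await countQuery.ExecuteNextAsync())
    {
        Console.WriteLine($"Movie vertices in the graph: {count}");
    }
}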

At the time of writing this article, the whole connector library from NuGet is in a preview version and there isn't one for .NET Core, so we need to sit this one out for .NET Core. Sorry for that.

Connecting the vertices with edges

The next thing in line is, of course, to add the edges that represent the relationship between a user and a movie. We label the relationships 'rates' and also add a property named weight whose value is the actual rating given by that user.

private async Task UploadReviews(DocumentClient client)
{
    try
    {
        Console.WriteLine("Uploading movie reviews");

        DocumentCollection graph = await client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri("graphdb"),
            new DocumentCollection { Id = "Movies" },
            new RequestOptions { OfferThroughput = 1000 });

        Console.WriteLine("Connected to graph Movies collection");

        Console.WriteLine("Reading review list");
        using (TextReader reader = new StreamReader("ratings2.csv"))
        using (CsvReader csv = new CsvReader(reader))
        {
            while (csv.Read())
            {
                string userId = "user" + csv.GetField<string>(0);
                string movieId = csv.GetField<string>(1);
                float rating = csv.GetField<float>(2);

                Console.WriteLine("Uploading review for user " + userId + " to " + movieId + " with rating " + rating);
                IDocumentQuery<dynamic> query = client.CreateGremlinQuery<dynamic>(graph, $"g.V().hasLabel('user').has('id', '{userId}').addE('rates').property('weight', {rating}).to(g.V().has('id', '{movieId}'))");
                while (query.HasMoreResults)
                {
                    var result = await query.ExecuteNextAsync();
                    foreach (var item in result)
                    {
                        Console.WriteLine(item);
                    }
                }
            }
        }
    }
    catch (DocumentClientException ex)
    {
        Console.WriteLine(ex.Message);
    }
}

The only noticeable change in this snippet is that we create edges between a movie and a user vertex. Like before, let's zoom in on the Gremlin construct we used this time to create an edge between two vertices.
g.V()
    .hasLabel('user')
    .has('id', '{userId}')
    .addE('rates')
    .property('weight', {rating})
    .to(g.V()
        .has('id', '{movieId}'))

The first enumeration, g.V(), enumerates all the vertices in the graph. We need to filter down to the user vertex first to create an edge from it. The next step, hasLabel('user'), filters down to the user vertices. Subsequently, .has('id', '{userId}') filters down to the vertex of the user bearing that user id. Then we use the addE('rates') step to add an edge labeled 'rates' going out from it. The following property('weight', {rating}) adds the weight property, with the rating as its value, on the edge we just created.

The last thing we do is tell where this edge points to. With to(g.V().has('id', '{movieId}')) we filter out the movie vertex we want and use it to define which vertex the edge points to. Decent, huh? You can find the full TinkerPop reference here.
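If you want to peek at what got created, a small traversal over one user's outgoing edges works nicely. Here is a sketch you could run from the Gremlin Console; the user id is just an example (the uploader prefixes every user id with "user"):

g.V().has('id', 'user1')     // pick one uploaded user vertex
    .outE('rates')           // follow its outgoing 'rates' edges
    .limit(3)
    .valueMap()              // show the edge properties, e.g. weight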

By now, a single user's rating of a movie should look something like this:

Finally, we have all our data ready. Time to traverse this graph and make a simple movie recommendation for any user. :) 

The simplest movie recommender system in this world

Let's devise a simple movie recommender, following the dumbest real-life approach we see around us. Usually, when I pick a movie to watch, I pick it based on the movies I have already seen and liked. If I liked Deadpool, there's a very good chance I will like Deadpool 2 and other superhero movies like Logan. Remember, we didn't add or use any genre data in our graph. All we have here are the users, the movies, and the ratings made by the users on those movies.

So, let's find out the other users who like the same movies as our reference user does. Our reference user is the user who has asked us for a recommendation of movies he should see. He has only given us the list of movies he has seen and rated. Our currently traversed graph should look like this:

We want to use Gremlin to traverse to the users who like the same movies our reference user likes. To construct the query, first we need to step out of our user vertex and find the movies our user likes. The Gremlin construct for that is:

g.V()
    .hasLabel('user')
    .has('id', 'user7')
    .outE('rates')
    .has('weight', gte(4.5))

Let's assume our reference user's id is user7. First we filter the vertices with label 'user' and id 'user7'. After that we use outE('rates') to traverse the outgoing edges labeled 'rates', keeping only the ones whose weight is greater than or equal to 4.5. That's how we land on the movies we think the user likes. But right now we are standing on the edges. To land on the movie vertices, we have to use:
g.V()
    .hasLabel('user')
    .has('id', 'user7')
    .outE('rates')
    .has('weight', gte(4.5))
    .inV()
    .as('exclude')

The inV() step enumerates all the movie vertices attached to those 'rates' edges. The as('exclude') step is there for a specific purpose; for now, all I can say is that I'm marking all the movies our user has already seen as 'exclude'. We will see why soon.

So, now we want to find all the other users who like the movies our reference user likes.

This is where we want to end up. We find that user2 and user3 like the same movies almost as much as our reference user does. To get there with Gremlin, our new query is:

g.V()
    .hasLabel('user')
    .has('id', 'user7')
    .outE('rates')
    .has('weight', gte(4.5))
    .inV()
    .as('exclude')
    .inE('rates')
    .has('weight', gte(4.5))
    .outV()

We added an inE('rates') step, since we now want to know which incoming edges labeled 'rates' point to the movies our reference user likes. We also filter those edges by a weight greater than or equal to 4.5, since we only want users who like the same movies. At last, we add outV() to find the users attached to those edges. Now we are standing on the users who like the same movies our reference user does.

We want to know what other movies these users like that our reference user has not rated or seen yet. I'm assuming our reference user has rated all the movies he has seen.

From the graph above we can clearly see that user3 and user2 like movie4 and movie5, which our reference user has not rated yet. These are viable candidates for our user's movie recommendations. It definitely seems naive, but it's a start. If you remember the 'exclude' marker, we are going to use it now to make sure our recommender doesn't recommend the movies our user has already seen. Our desired Gremlin query is:

g.V()
    .hasLabel('user')
    .has('id', 'user7')
    .outE('rates')
    .has('weight', gte(4.5))
    .inV()
    .as('exclude')
    .inE('rates')
    .has('weight', gte(4.5))
    .outV()
    .outE('rates')
    .has('weight', gte(4.5))
    .inV()
    .where(neq('exclude'))

We travel to the movies all these other users like and make sure we exclude the ones our reference user has already seen using where(neq('exclude')). To fix our naivety a little, let's take the distinct movies using dedup() and order them by the number of ratings they have received.
g.V()
    .hasLabel('user')
    .has('id', 'user7')
    .outE('rates')
    .has('weight', gte(4.5))
    .inV()
    .as('exclude')
    .inE('rates')
    .has('weight', gte(4.5))
    .outV()
    .outE('rates')
    .has('weight', gte(4))
    .inV()
    .where(neq('exclude'))
    .dedup()
    .order().by(inE('rates').count(), decr)
    .limit(10)
    .values('title')

This is far too naive to survive any production requirement, but it is indeed an eye-opener for what Apache TinkerPop and the Azure Cosmos DB graph database can do.

If you look at the final query, you will see that we order the final movie vertices by the count of incoming 'rates' edges each one has, in descending order, using order().by(inE('rates').count(), decr). We limit the result to the first 10 vertices and take only the titles of the movies.
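To run this from C#, the same CreateGremlinQuery pattern we used during upload works. Here is a minimal sketch; the method and variable names are assumptions, and the query string is the one we just built:

// Assumes 'client' and 'graph' are the DocumentClient and DocumentCollection from earlier.
private async Task RecommendMovies(DocumentClient client, DocumentCollection graph, string userId)
{
    string query = $"g.V().hasLabel('user').has('id', '{userId}')" +
                   ".outE('rates').has('weight', gte(4.5)).inV().as('exclude')" +
                   ".inE('rates').has('weight', gte(4.5)).outV()" +
                   ".outE('rates').has('weight', gte(4)).inV()" +
                   ".where(neq('exclude')).dedup()" +
                   ".order().by(inE('rates').count(), decr).limit(10).values('title')";

    IDocumentQuery<dynamic> gremlinQuery = client.CreateGremlinQuery<dynamic>(graph, query);
    while (gremlinQuery.HasMoreResults)
    {
        foreach (var title in await gremlinQuery.ExecuteNextAsync())
        {
            Console.WriteLine(title);   // each result is a recommended movie title
        }
    }
}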

Putting the system to test

I wrote a simple REPL to try out various Gremlin commands against our Azure Cosmos DB. Our reference user's id was 'user7'. The list of the movies seen by our reference user is:

[Screenshot: MoviesSeenByUser]

If that is too hard to read, let's list the movies here:

  • "Braveheart (1995)"
  • "Star Wars: Episode IV - A New Hope (1977)"
  • "Shawshank Redemption, The (1994)"
  • "Wallace & Gromit: The Best of Aardman Animation (1996)"
  • "Wallace & Gromit: A Close Shave (1995)"
  • "Wallace & Gromit: The Wrong Trousers (1993)"
  • "Star Wars: Episode V - The Empire Strikes Back (1980)"
  • "Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)"
  • "Star Wars: Episode VI - Return of the Jedi (1983)"
  • "Grand Day Out with Wallace and Gromit, A (1989)"
  • "Amadeus (1984)"
  • "Glory (1989)"
  • "Beavis and Butt-Head Do America (1996)"

The movies our simple recommender suggested are

  • "Forrest Gump (1994)",
  • "Pulp Fiction (1994)",
  • "Fargo (1996)",
  • "Silence of the Lambs, The (1991)",
  • "Star Trek: Generations (1994) ",
  • "Jurassic Park (1993)",
  • "Matrix, The (1999)",
  • "Toy Story (1995)",
  • "Schindler's List (1993)",
  • "Terminator 2: Judgment Day (1991)"

Clearly, this is not the best recommender engine, but it is definitely one of the simplest. We could always use the genome data that comes with the dataset, along with the genre data, and do proper collaborative filtering. But the scope of this article was just to demonstrate the capabilities of a simple graph traversal.

Hope it was fun to read. Try out the Azure Cosmos DB Graph API and Apache TinkerPop if you can. They are really fun to use together, and there's so much you can do with simple graph traversals.

The sample code is hosted on GitHub here and is also attached to this article.

