How To Debug An Alexa .NET Core Skill

Introduction

Alexa skills can be developed using either Alexa Lambda functions or a REST API endpoint. A Lambda function is Amazon's implementation of serverless functions in AWS. Amazon recommends using Lambda functions; however, they are not easy to debug. While you can log to CloudWatch, you can't hit a breakpoint and step through the code. Debugging services like Rookout allow for live debugging of Node.js and Python, but there is no equivalent for C#.
 
This makes live debugging of Alexa requests a challenge. One solution is to wrap the meaningful logic in a .NET Standard class library and stand up both a REST API project for development and debugging and a Lambda function project for production deployment. This article outlines how to create an environment to debug a locally hosted Web API that uses the same logic as the Lambda function.
 
Structuring the Solution
 
This approach requires a minimum of three projects in the solution:
  • Lambda function project (.NET Core 2.1)
  • Web API project (.NET Core 2.1)
  • Class Library (.NET Standard 2.0)
This article assumes familiarity with creating Lambda functions and .NET Core Web API projects. The AWS Toolkit for Visual Studio includes Lambda function project templates that can be used to create .NET Core Lambda functions. All meaningful logic should be contained in the class library project and referenced by both the Lambda function project and the Web API project. Both the Lambda project and the Web API project should be thin wrappers around the class library.
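The thin-wrapper pattern can be sketched as follows. This is a minimal illustration, not code taken from the sample projects; `AlexaRequest`, `AlexaResponse`, and `AlexaRequestProcessor` are hypothetical names standing in for whatever request model and processor the skill actually uses:

```csharp
// Class library (.NET Standard 2.0): all meaningful skill logic lives here.
public class AlexaRequestProcessor
{
    public Task<AlexaResponse> ProcessRequestAsync(AlexaRequest request)
    {
        // ... intent routing, session handling, response building ...
        throw new NotImplementedException();
    }
}

// Lambda function project (.NET Core 2.1): thin wrapper for production.
public class Function
{
    private readonly AlexaRequestProcessor _processor = new AlexaRequestProcessor();

    public Task<AlexaResponse> FunctionHandler(AlexaRequest request, ILambdaContext context)
        => _processor.ProcessRequestAsync(request);
}

// Web API project (.NET Core 2.1): thin wrapper for local debugging.
[Route("api/alexa")]
public class AlexaController : Controller
{
    private readonly AlexaRequestProcessor _processor = new AlexaRequestProcessor();

    [HttpPost]
    public Task<AlexaResponse> Post([FromBody] AlexaRequest request)
        => _processor.ProcessRequestAsync(request);
}
```

Because both entry points delegate immediately, any breakpoint set inside the class library is hit when debugging through the Web API and exercises the same code path the Lambda function runs in production.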
 
For reference, the Whetstone.Alexa.EmailAddressChecker and the Whetstone.Alexa.AdventureSample GitHub projects follow this approach.
 
Configuring the Web API
 
By default, a Web API project listens on localhost when launching a debug session. Alexa requires an endpoint that is publicly accessible over SSL. A local debug endpoint is neither.
 
 
Using nGrok
 
Ngrok is a free command line tool that temporarily exposes local development environments through a public-facing endpoint. The tunnel can be spun up and torn down from the command line. The utility can be downloaded from the ngrok downloads page. You will need to sign up for a free account and place the ngrok executable in your command line path.
 
Once installed, ngrok can expose the local port used by the debug session (54768 in this example) using:
 
ngrok http 54768

 
This spins up endpoints at http://d6fac23f.ngrok.io and https://d6fac23f.ngrok.io and routes all requests to localhost:54768. This URL needs to be applied to the Alexa skill configuration page. Navigate to the Alexa skill to debug, select Endpoint, and apply the URL.
 
 
Make sure to select the second option, "My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority." The other two options prevent the Alexa test harness from reaching the ngrok public endpoint. Also enter the fully qualified route to the Web API controller that handles the Alexa request. In this case, it is:
 
https://d6fac23f.ngrok.io/api/alexa
 
The configuration console will remind you to rebuild your skill model. Return to your skill interaction model configuration and select Build Model.
 
 
Now, you should be able to:
  1. Launch the Web API project in a debug session in your local development environment
  2. Test the skill in the Alexa test harness 
If the Windows firewall blocks the request, you can correct this by using netsh to grant public access to port 54768. Launch a command prompt with administrative privileges and execute:
 
netsh http add urlacl url=http://localhost:54768/ user=everyone
 
Access can be revoked with:
 
netsh http delete urlacl url=http://localhost:54768/
 
Make sure to include the trailing slash on the URL; otherwise, netsh will report an invalid argument error.
 
Every time ngrok opens a tunnel using the free tier, it uses a different domain name. The sample above uses d6fac23f.ngrok.io. The next time it runs, the domain name will be different. Start the ngrok tunnel once, update the Alexa skill endpoint, and leave it running while launching multiple debug sessions.
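If you script any part of this workflow, the current public hostname can be read from ngrok's local inspection API rather than copied from the console. This assumes ngrok is running with its default inspection port of 4040:

```shell
# List active tunnels; the public_url fields hold the current *.ngrok.io addresses.
curl -s http://localhost:4040/api/tunnels
```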
 
Using a Subdomain
 
Depending on how often you need to debug Alexa skills, changing the Alexa skill configuration each time you restart ngrok can be tedious. With ngrok's Basic $60/month subscription, you can reserve a custom subdomain. I found it worthwhile, and now I can launch ngrok using:
 
ngrok http 54768 -host-header="localhost:54768" -subdomain=whetstone
 
Monitoring Requests and Responses
 
While ngrok is running, it provides a monitoring endpoint at localhost:4040 on the development machine. This is useful for inspecting requests and responses.

 
This helps when using the test harness to debug incorrectly formatted responses. For example, the Alexa test harness does not display the response from the Web API if the format is invalid; the ngrok monitoring endpoint does.
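For reference, a minimal well-formed Alexa response body looks like the following (the speech text is illustrative). If the Web API returns JSON that deviates from this shape, the test harness shows nothing while the ngrok inspector still shows the raw response:

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Welcome to the sample skill."
    },
    "shouldEndSession": true
  }
}
```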

 
This is particularly helpful when checking on requests originating from sources other than the Alexa test harness. Amazon recently enabled skill events that can be invoked outside the context of a user session. For example, a skill can optionally receive a request when a user enables or disables it in the Alexa mobile app or website, or when the user grants or revokes access to personal profile data, like their email address or phone number. None of this can be tested in the Alexa test harness. The skill must be made available to beta users who trigger these events, and the messages they generate can be captured with ngrok.
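As an illustration, a skill-enabled event arrives at the same endpoint as a regular request but with a distinct request type. This is a trimmed sketch; the field values are placeholders and the full payload includes additional context fields:

```json
{
  "version": "1.0",
  "request": {
    "type": "AlexaSkillEvent.SkillEnabled",
    "requestId": "...",
    "timestamp": "2018-01-01T00:00:00Z"
  }
}
```

Capturing one of these payloads in the ngrok inspector is the easiest way to see exactly what shape the skill must handle.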
 
Summary
 
This article demonstrated how to use ngrok to actively debug Alexa skills and touched on how to structure the solution to reuse common code across a Lambda function and a Web API. It also showed how to use ngrok's monitoring endpoint and a custom subdomain. In a future article, I'll cover how to set up the two wrapper projects to share the same configuration between the Lambda function and the Web API using dependency injection.