Setting Up VS Code Remote Development on the Raspberry Pi

This is part of my series on learning to build an End-to-End Analytics Platform project.

TLDR; The goal is to work remotely on the Raspberry Pi. We added the VS Code Remote Development extension pack and used the Remote – SSH extension, which is part of the pack, to connect to the Pi remotely over the network. We set up key-based/passwordless SSH authentication, then added an entry to our SSH config file through VS Code so the host is remembered. Finally, we got started with the Sense HAT and wrote some code to do stuff on the Pi!

Working remote 🧑‍🌾

We want to work remotely on the Pi (the artist formerly known as Raspberry Pi 🕺). Sticking with VS Code, there is an extension to help us do remote development:

VS Code remote extension text.
Remote extensions

This Remote Development extension pack includes three extensions:

  • Remote – SSH – Work with source code in any location by opening folders on a remote machine/VM using SSH. Supports x86_64, ARMv7l (AArch32), and ARMv8l (AArch64) glibc-based Linux, Windows 10/Server (1803+), and macOS 10.14+ (Mojave) SSH hosts.
  • Remote – Containers – Work with a separate toolchain or container-based application by opening any folder mounted into or inside a container.
  • Remote – WSL – Get a Linux-powered development experience from the comfort of Windows by opening any folder in the Windows Subsystem for Linux.

The one we are after for the time being is the Remote – SSH extension, which allows us to connect to the Pi over SSH. It's not quite as simple as that, though. Look at the supported architectures: we need to make sure our Pi runs one of them. Looking at the extension documentation, we can see it supports ARMv7l (AArch32) Raspbian Stretch/9+ (32-bit).

When we previously SSH'd into the Pi, we got a glimpse of the architecture:

Command line text with Raspberry Pi version.
Version dePi
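If you want to double-check the architecture directly, uname reports it. A quick sketch (the output shown is what a 32-bit ARMv7 Pi reports; yours may differ):

uname -m
# armv7l -> ARMv7l (AArch32), which is on the supported list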

To get the OS version we can use the cat command to find the release information:

cat /etc/os-release
Command line text with Raspberry Pi version.
Buster Pi

We have a supported configuration. Let's try to connect remotely via SSH to our Pi with VS Code using our new extension pack. Open the Command Palette and type "remote-ssh". Look for the "Connect to Host…" option:

VS Code command prompt with text.
Contact 📡
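The host you type into that prompt is the same user@host target you would give a plain ssh command; for example (the user and address are placeholders, yours will differ):

pi@<ip-of-your-pi>

which is equivalent to connecting from a terminal with:

ssh pi@<ip-of-your-pi>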

Then select the platform of the host, which in our case is Linux. Raspbian, the Pi-specific OS, is a Linux distribution based on Debian. Fill in the login credentials to finish establishing the connection.

One thing I spotted was the prompt in the bottom-right corner of VS Code: it was connecting via SSH, nice, and then "Installing VS Code Server". Wait, what? Looking at the architecture documentation, this is part of how VS Code Remote Development works: a server component gets installed and runs on the remote host. A bit more than I am going to dig into here:

VS Code status bar with installation prompt.
Deploying

Eureka! We are connected remotely to the Pi! Take a look at the VS Code status bar in the bottom left. Notice that it says "SSH: <ip of Pi>" and the Terminal window is a bash shell running on the Pi.

VS Code terminal with remote SSH connection.
Ssh.. Terminal likes Pi too

Now that we are connected remotely to the Pi, let's get started with the Sense HAT. First things first: software updates. I am learning about Linux as I go here. By default, standard users aren't allowed to install applications on a Linux machine, so to update the software we need to escalate privileges. The "Run as Administrator" equivalent in Linux terms is "sudo". I'm team "super user do", it just sounds epic. Then apt-get update/upgrade invokes the package handling utility to refresh the package lists and install the newest versions of packages on the system.

sudo apt-get update
sudo apt-get upgrade
VS Code terminal with installation status messages.
Yep.. updates

I ran both commands (only one shown for brevity). They ran like a charm 🍀. The upgrade pulled all it needed, created a few diversions, seemed to unpack its bags, set itself up, and processed what just happened. I don't know about you, but our Pi seems to have gone through a big phase in its life 🤣. Our software is updated. Now to install the Sense HAT software:

sudo apt-get install sense-hat
VS Code terminal with package status messages.
Already the new and shiny

Nice! Our Sense HAT software is up to date. Let's start writing some code. What is awesome, though, is that there is an online emulator (trinket.io) that you can use to code the Sense HAT in your browser if you don't have one! Next up, we set up new directories for the code with mkdir:

mkdir repos
cd repos
mkdir sense-hat-noob
cd sense-hat-noob

Now once that's done, VS Code can pick up the new folder and we can open it using the regular "Open Folder" option in the "File" menu:

VS Code command prompt with filepath text.
The Pi files

Short detour here. You might keep getting prompted for the user password. That got annoying fast, so I set up key-based/passwordless SSH authentication. Then, to remember the host, we can add an entry to our SSH config file through VS Code (a rough sketch of the commands follows the screenshot below).

VS Code SSH target extension.
Remember me
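A minimal sketch of that setup, assuming an ed25519 key and default paths (the host alias, user, and address are placeholders):

ssh-keygen -t ed25519
ssh-copy-id pi@<ip-of-your-pi>
# On Windows, where ssh-copy-id isn't available, append the contents of the .pub file
# to ~/.ssh/authorized_keys on the Pi instead.

And the corresponding entry in the SSH config file (~/.ssh/config):

Host raspberrypi
    HostName <ip-of-your-pi>
    User pi
    IdentityFile ~/.ssh/id_ed25519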

Now to add a file and write some code. To do that we are using the touch command (a quick example is below). The moment we create a file with a .py extension, VS Code notices it's a Python file and pops up asking whether we want to install the recommended Python extensions.
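The file name here is just an example; any .py file triggers the same behaviour:

touch hello-world.py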

VS Code remote Python extension text.
Why did it have to be snakes? 🐍🤠

What is interesting is that it suggests the extension should be installed on the Pi 🤔 Again, this is related to the architecture for remote development. I tried not smiling about this, but apparently this extension has preferences…

VS Code remote Python host extension text.
There not here

Okay. We have reasonably secure remote connectivity, remote extensions, and a code file; now we need some code. Thanks to the new remote extension we have IntelliSense (Pylance) running:

VS Code Python Pylance intellisense.
I sense intellisense

I wrote some code to display a message on the LED face. There are a bunch of parameters for the function, let's keep it simple here:

from sense_hat import SenseHat
sense = SenseHat()
sense.show_message("Hello world")
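If you want to play with those parameters, the same call accepts a scroll speed and colours. The values below are picked arbitrarily for illustration, based on the documented show_message signature:

sense.show_message("Hello world", scroll_speed=0.05, text_colour=[255, 255, 0], back_colour=[0, 0, 64])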

Voila! This sparks joy! The Pi displays its elation! I got a snap of the exclamation marks scrolling across the screen (pretty cool catching the one LED off as the characters move across the face).

Raspberry Pi Sense HAT led display with exclamation marks.
Spark Joy

That's it for this one. Tons of new learning experiences. We are making progress! If you have any ideas or suggestions on things that can be done better, let me know.

Until next time, take care.

🐜


End-to-End Analytics Platform – IoT with Pi 🥧

This is part of my series on learning to build an End-to-End Analytics Platform project.

TLDR; My very first Pi and Sense HAT were graciously gifted to me by Jonathan Wade (LinkedIn). I assembled it. Tried to go headless. Ended up adding a head because reasons. Ran through initial setup and updates. Configured OpenSSH on Windows and SSH on the Pi to get to the headless state.

I got gifted something amazing! Yup, my first Raspberry Pi. Not only that, a Sense HAT too! Now for many people that might not mean much, but this is a pretty big moment for me. To save you from another unboxing experience I took the liberty of doing that privately and cutting to the end. Behold! The unboxed product:

Raspberry Pi device parts on a table.
Bare metal

Looking through what we have here: the Raspberry Pi 4 Model B (top left), the Raspberry Pi Sense HAT (bottom middle), a SanDisk 32GB microSD card, some spacers, screws, a power cable, and an HDMI to mini HDMI cable.

Raspberry Pi device and Sense HAT assembled
Fresh off the factory floor

Assembling the unit was really simple. Just a quick look at the Sense HAT board and we can see some amazing things:

  • Air Pressure sensor
  • Temperature and humidity sensor
  • Accelerometer, gyroscope, and magnetometer
  • 8×8 LED matrix display
  • Even a small joystick!

This device is pretty EPIC and we haven't even powered it on yet. So many things I haven't ever worked with, but I can't wait to try and figure them out.

Raspberry Pi Sense HAT LED lights on
Like a diamond 💎

Next up, power and networking. The moment I connected the power, a rainbow 🌈 filled the room. A sign of a pot o'learnings 🪙 to be found at the end of this experience.

Nice! Now we have the whole unit assembled. What's the plan? Well, the thinking is to deploy and run Azure SQL Edge on it. Why? A few reasons:

  • I have never worked with a Raspberry Pi
  • I haven't really worked on Linux at all
  • I have never done any work with Azure IoT solutions, or IoT at all for that matter
  • I do know Azure SQL reasonably well, though not Azure SQL Edge
  • Azure SQL Edge has a bunch of interesting things: data streaming, time series analysis, and ONNX AI/ML capabilities, none of which I have worked with.

I didn't connect a screen at this stage; running without one is known as "headless". We still need a way to connect to the Pi, though. We have the Windows Terminal and found that we can use SSH to connect to the Pi from a Windows 10+ machine.

Terminal with OpenSSH installation
SSHHHHHH…

That didn't work. Apparently SSH is disabled by default. Considering I don't have a microSD card reader, it's time to put a "head" on the nearly headless Pi and connect a screen 🖥️. The HDMI cable, a keyboard, and a mouse later, and we are connected. I ran through the setup, updated the password, downloaded the latest updates, then set up SSH. There are other security best practices that I am going to follow as well after this post. Then I tried to connect again and… success!

Terminal with successful SSH connection message
Connection suck seeds! 🌱
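For reference, once you have a terminal open on the Pi, SSH can also be enabled from the command line. A sketch of two common ways (menu names may differ slightly between Raspbian versions):

sudo raspi-config        # Interfacing Options -> SSH -> Enable
# or enable and start the service directly
sudo systemctl enable ssh
sudo systemctl start ssh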

Next I shut the Pi down and disconnected the screen, mouse, and keyboard. I'm going to try to work on this device remotely, so I don't need those peripherals right now.

Now that we have an IoT device, I am going to start exploring whether there are any open data sets that I can use, and feed some of the device telemetry into the end-to-end analytics solution as a cohesive project. We are going to set up additional services in our solution to support IoT devices, which will be fun.

Until next time.

🐜

Infrastructure as Code (IaC) reuse

Photo by RODNAE Productions from Pexels

This is part of my series on learning to build an End-to-End Analytics Platform project.

TLDR; I made improvements to the Infrastructure as Code from the previous post by following best practices and promoting code reuse. Continued with parameters, but extended the code with scopes, modules, variables, functions, operators, and outputs. There is a list of Bicep best practices that is worth looking into.

Divide and conquer

We can use modules to group a set of one or more resources to be deployed together, and reuse them for better readability and maintainability. From what I understand, they basically get converted to nested ARM templates.

The first part that I want to move into a module is the data lake storage account, resolving its dependencies along the way. When that's done, repeat the process for the other resources that we want to deploy.

Moving day.
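As a rough sketch of the shape this takes (the file path, module name, and parameters are illustrative, not my exact code): the storage account resource moves into its own Bicep file, and the main file calls it with the module keyword.

module datalake 'modules/datalake.bicep' = {
  name: 'datalakeDeployment'
  params: {
    storageAccountName: 'fancyname'
    location: resourceGroup().location
  }
}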

Next up, update modules to use parameters and variables where possible to avoid hard-coded values. We should end up in a position where the module is a bit of code that can be called with a set of parameters. Note that when the resources are in the same file, you can reference them directly. An example from my previous post was where I referenced the datalake resource.

Same file resource references.

A module only exposes parameters and outputs to other Bicep files. When we move the data lake resource creation to a module, we need to leverage outputs, which can then be passed between modules. The idea is to call a module -> deploy the resource -> output important things -> pass those things to another module as input parameters. So, the same property I referenced before now becomes an output in the storage account module:

Output for output.

Output values can now be used in the main file as inputs to another module, etc. We just reference them using the <module>.outputs.<name> syntax.

Outputs as inputs.
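Roughly, the pattern looks like this (the names are illustrative): the module declares an output, and the main file passes it along to the next module.

/*modules/datalake.bicep – expose the DFS endpoint of the storage account declared in this module*/
output datalakeDfsEndpoint string = datalake.properties.primaryEndpoints.dfs

/*main.bicep – feed the output of one module into another as a parameter*/
module synapse 'modules/synapse.bicep' = {
  name: 'synapseDeployment'
  params: {
    defaultDataLakeAccountUrl: datalake.outputs.datalakeDfsEndpoint
  }
}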

We use operators in our deployments for things like conditional deployments.

On one condition.
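A minimal sketch of what that can look like (parameter names and values are made up for illustration): if() controls whether a resource gets deployed at all, and the ternary operator picks between values.

param deployDataLake bool = true
param environmentType string = 'dev'

/*Ternary operator: pick a SKU based on the environment*/
var storageSku = (environmentType == 'prod') ? 'Standard_GRS' : 'Standard_LRS'

/*Only deploy the storage account when the flag is set*/
resource datalake 'Microsoft.Storage/storageAccounts@2021-04-01' = if (deployDataLake) {
  name: 'fancynamedev001'
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: storageSku
  }
}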

Expanding on the use of parameters and variables, functions are a great way to drive flexibility and reuse into your deployments: getting runtime details, resource references, resource information, arrays, dates, and more. Just remember that most work at all scopes, but some don't, and when they don't you will probably figure that out through errors. One way to use them is to inherit the resource group location during resource deployment. In our case: setting variables with the resource group location, appending a deterministic hash string suffix derived from the resource group to the storage account name, or enforcing lower case names, then using the variables for deployment.

Variables and functioning captain 👩‍✈️

FYI, the weird-looking string notation '${var}'… that's called 'string interpolation'. Pretty simple compared to other ways I've had to write parameterised strings before, with all kinds of placeholders, parameters, and functions. I like!
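Pulling those together, here is roughly what that looks like (the names are examples; uniqueString() produces a deterministic hash from its inputs):

param projectName string = 'e2eaplat'

/*Inherit the location from the resource group we deploy into*/
var location = resourceGroup().location
/*Deterministic suffix derived from the resource group id, forced to lower case*/
var storageAccountName = toLower('${projectName}${uniqueString(resourceGroup().id)}')

resource datalake 'Microsoft.Storage/storageAccounts@2021-04-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
}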

As a good practice we use parameter decorators to control parameter constraints or metadata. Things like allowed values, lengths, secure strings, etc.

Prettier.
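For example (the specific constraints here are made up for illustration):

@minLength(3)
@maxLength(24)
param storageAccountName string

@allowed([
  'dev'
  'test'
  'prod'
])
param environmentType string = 'dev'

@secure()
param synapseSqlAdministratorLoginPassword string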

What we do next in our main deployment file is change the scope. That way we can deploy at the subscription level, which lets us create resource groups in Bicep instead of with the Azure CLI as we did in the previous post.

Scoping things out 🔭

Note: It's preferred in most cases to put all parameters/variables at the top of the file.

One other point of interest is that when we change the scope, the modules that deploy resources error out because those resources can't be deployed at the subscription level, only at the resource group level. Makes sense. So we need to set their scope in the deployment.

Scope inception.
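A sketch of the reshuffle (names and location are illustrative): the main file targets the subscription, creates the resource group, and then points each module at that resource group.

/*main.bicep*/
targetScope = 'subscription'

param resourceGroupName string = 'rg-e2e-analytics'
param location string = 'westeurope'

resource rg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: resourceGroupName
  location: location
}

/*Modules deploy at resource group scope, so point them at the group we just created*/
module datalake 'modules/datalake.bicep' = {
  name: 'datalakeDeployment'
  scope: rg
  params: {
    location: location
  }
}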

Polishing up the current solution with these practices was good learning. I continued with the approach across all modules and files. Then ran a few tests to make sure the resources deploy as expected.

That covers it for this post. What I think we will do next is work on setting up a CI/CD pipeline in GitHub to build and deploy these resources into Azure.

🐜

End-to-End Analytics Platform – Infrastructure as Code (IaC)

Photo by John Nail from Pexels

This is part of my series on learning to build an End-to-End Analytics Platform project.

TLDR; I set up a GitHub milestone with two issues. Started working with the Bicep language to build Azure resources. It's basically a language that simplifies building Azure Resource Manager (ARM) templates. I installed the Bicep tooling. Defined resources using things like parameters, modules, and others, following an ARM template guide. Used Bicep build to generate an ARM template from the Bicep file. Experimented with Bicep decompile to generate a Bicep file from an ARM template. Created my first gist to share some code. Lastly, used the Azure CLI to deploy the Bicep resources. Also… found a Bicep playground 🤸‍♂️ Just saying..

Prepping for Dev 👨‍💻

We are using the development flow from my previous post. Not enough time? Check out the GitHub Flow.

We need a starting point to build out our end-to-end analytics platform. We are going to attempt to deploy an Azure Synapse Analytics workspace and the required services with Bicep templates. This gives us two key capabilities:

  • Data Lake Storage to store our data
  • Pipelines to support orchestration and batch ingestion of data

Let's get started. We created a new issue, updated the project board, and set up our new branch in GitHub. Pulled the updates locally, then checked out that new branch (the commands are below the screenshot).

Getting the hang of issues.
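The local side of that flow is just a couple of git commands (the branch name is an example; yours will match whatever you created in GitHub):

git pull
git checkout feature/bicep-deployment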

In this post, though, I wanted to learn about GitHub Milestones. They make it easy to track a bunch of related issues, and they have convenient progress tracking built in. So I added another issue, then made my way to the milestone page from the issues tab:

More issues.

Used the 'New milestone' button to create a new milestone. Gave it a name and filled in the details.

A milestone. Yep.

After that, jump back to the issue and assign it to the milestone we just created. Notice that the milestone has a progress bar.

I will walk 500 miles.

Nice. Issues, milestones, branches, in the flow. Time to get to building things.

Building Biceps 💪

To work with Bicep files we need to install the Bicep tools (Azure CLI + Bicep install, VS Code Bicep extension). Once that's done, we add our first .bicep file 🦾 to the project. Remember to check which branch you are on locally.

Flexing our first Bicep file.
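For reference, the Azure CLI can manage the Bicep tooling itself (output and versions will vary):

az bicep install
az bicep version
az bicep upgrade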

Stepping back, according to the documentation, Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. We are not covering every area of Bicep here; the documentation does a good job of that. There are a few things that we are going to use in this post, parameters, resources, comments, and conditions among them.

There are a bunch of other capabilities that you can explore, from loops to functions and more. So much goodness, so little time… someday maybe.

Now to start building out our resources. Let's start with some parameters:

Intellisense integration for Bicep development.

Yes please! IntelliSense for the win! I also added a comment, which is being kind to my future self. Now let's add a resource. The IntelliSense really helps to expedite development: we can get to the resource, the API version, and more using the Tab key. Another nice thing is using Ctrl+Space to expand more options, properties, and more:

Building a Bicep resource.

We have some basic building blocks for figuring out how to create a resource. Next I expanded the declaration with more resources, parameters, comments, and properties using the template documentation and Azure-Samples.

Note: You could also try creating a Synapse Analytics Workspace using the portal and grab the template just before you create it.

Notice that you can reference the parameters in the resource declaration, which helps with code reuse. I added a deployment condition to control which services get deployed. The code is in an intermediate state to showcase that I can use strings or parameters to assign values. I'll gist show you what I did 😉:

/*Global parameters*/
param resLocation string = resourceGroup().location
/*These control whether we deploy the resource or not*/
param deployDataLake bool = true
param deploySynapse bool = true
/*Resource-specific parameters – Synapse Analytics*/
param synapseWorkspaceName string = 'fancy-name'
param synapseSqlAdministratorLogin string = 'majestic-username'
param synapseSqlAdministratorLoginPassword string = 'your-complex-password'

/*Create a data lake storage account which we use as the Synapse Analytics default data lake*/
resource datalake 'Microsoft.Storage/storageAccounts@2021-04-01' = if (deployDataLake == true) {
  name: 'fancy-name'
  location: resLocation
  sku: {
    name: 'Standard_LRS'
    tier: 'Standard'
  }
  kind: 'StorageV2'
  properties: {
    isHnsEnabled: true
    supportsHttpsTrafficOnly: true
    accessTier: 'Hot'
    networkAcls: {
      defaultAction: 'Allow'
      bypass: 'AzureServices'
      virtualNetworkRules: []
      ipRules: []
    }
    encryption: {
      services: {
        blob: {
          enabled: true
        }
        file: {
          enabled: true
        }
      }
      keySource: 'Microsoft.Storage'
    }
  }
}

/*
I built this child resource by working my way back through these templates: https://github.com/Azure-Samples/Synapse/tree/main/Manage/DeployWorkspace/storage
It gets a little tricky, but we are building a dependency chain of parent-child resources, e.g. Storage account -> Blob -> Container
*/
resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2021-04-01' = if (deployDataLake == true) {
  parent: datalake
  name: 'default'
  properties: {
    cors: {
      corsRules: []
    }
    deleteRetentionPolicy: {
      enabled: false
    }
  }
}

resource container 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-04-01' = if (deployDataLake == true) {
  parent: blobService
  name: 'workspace'
  properties: {
    publicAccess: 'None'
  }
}

/*Create a Synapse Analytics workspace*/
resource synapseWorkspace 'Microsoft.Synapse/workspaces@2021-04-01-preview' = if (deploySynapse == true) {
  name: synapseWorkspaceName
  location: resLocation
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    defaultDataLakeStorage: {
      /*I used the datalake resource and can use dot notation to reference information about it. This establishes a dependency.*/
      accountUrl: datalake.properties.primaryEndpoints.dfs
      filesystem: container.name
    }
    sqlAdministratorLogin: synapseSqlAdministratorLogin
    sqlAdministratorLoginPassword: synapseSqlAdministratorLoginPassword
  }
  resource workspaceFirewall 'firewallRules@2021-04-01-preview' = {
    name: 'allowAll'
    properties: {
      startIpAddress: '0.0.0.0'
      endIpAddress: '255.255.255.255'
    }
  }
}

It's a basic Synapse deployment; the goal is to start deploying using Bicep. We can add things like RBAC assignments for storage access, network configurations on the storage firewalls, and more later.

To ship it 🚢 we can use the Azure CLI in the VS Code integrated terminal. The deployment is pretty simple. Log in to your Azure subscription with the Azure CLI and set your subscription context.

az login
az account list
az account set --subscription 'your-subscription-name-or-id'

Create a resource group in which to deploy the resources defined in the .bicep file. Bicep can do this too, which we will get to another day.

az group create --resource-group 'your-resource-group' --location 'azure-region'
Success.

Deploy the resources at the resource group level, specifying the Bicep file path as our template file. Once you submit, the terminal will indicate that the deployment is running. We should see a JSON summary output when it's done, similar to our resource group deployment.

az deployment group create --resource-group 'your-resource-group' --template-file 'path-to-your-bicep-file'
Deploying robots.

Checking the deployment in the Azure Portal is simple. Navigate to the resource group. On the 'Overview' page, there is a 'Deployments' link:

Deployments are deploying.

If you keep following the trail, you end up at the deployment detail screen:

It's working… It's working! 🚀

We can validate the resources are deployed in the Azure Portal:

Deployed!

Let's close off one of our issues and see what it does to the milestone:

Milestone achieved ✅

Awesome! To clean up, just delete the resource group 👍

az group delete --resource-group 'your-resource-group'

Interesting finds

A nice capability for users coming from an ARM background is that you can use Bicep build to generate the ARM template from a Bicep file 😉.

az bicep build --file 'path-to-your-bicep-file'
Building ARMs from Biceps lol

If you have ARM templates, you can try out the Bicep decompile functionality to TRY (it's not perfect, so no guarantees) to convert your ARM templates to Bicep files.

az bicep decompile --file 'path-to-your-bicep-file'

Wow! Another lengthy post. Thanks for sticking around. We covered some serious ground, learnt a bunch, and kept building a foundation for our future work. In future posts we can tackle things like modules, advanced resource deployments, and deploying using GitHub Actions, which should be fun.

🐜