Server-Side Rendering a React App with Deno

Intro

Two of my favourite things are React and dinosaurs. In this article I will show how I've put them together to develop a server-side rendered React application with Deno.

Project Setup

I will assume that we are all familiar with React and Deno. Since Deno is pretty new, if you don't know how to install it or how it works, I would highly suggest you read this great introduction before diving into this article.

Now let's create the project structure and the files needed for this tutorial. I'm using Visual Studio Code, but any editor will do. Open your terminal and type:

mkdir deno-react-ssr && cd $_
code .

This will create a new folder called deno-react-ssr and open it with VS Code. In this folder we will need to create three files: app.tsx, which will contain the React component; server.tsx, for the server code; and deps.ts, which will hold all our dependencies. Think of it as our version of a package.json. You will end up with a structure like this:

.
├── app.tsx
├── deps.ts
└── server.tsx

Setting up the dependencies

In deps.ts we will have to export all the dependencies needed for this application to run. Copy the following code and add it to your file.

// @deno-types="https://deno.land/x/types/react/v16.13.1/react.d.ts"
import React from 'https://jspm.dev/react@16.13.1';
// @deno-types="https://deno.land/x/types/react-dom/v16.13.1/server.d.ts"
import ReactDOMServer from 'https://jspm.dev/react-dom@16.13.1/server';
export { React, ReactDOMServer }
export { Application, Context, Router } from 'https://deno.land/x/oak@v4.0.0/mod.ts';

As you can see, in Deno you import modules directly from a URL. I've decided to import React and ReactDOMServer from jspm, as suggested in the documentation for third party modules, but you can use any other CDN that provides the same modules.

One line that may stand out to you is this:
// @deno-types="https://deno.land/x/types/react/v16.13.1/react.d.ts"
Since we are using TypeScript, this line tells Deno where to find the type definitions it needs, and it applies to the import statement that follows. A more exhaustive explanation can be found in the Deno Type Hint manual.

I've also decided to use Oak, a middleware framework for Deno's HTTP server that also provides a router, so I'm importing all the modules we will use in the server, in addition to the Context type that TypeScript requires.
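
If you want to fetch and compile all of these dependencies up front rather than on the first run, Deno can do that for you; this is just a convenience, not something the tutorial requires:

deno cache deps.ts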

Create your React component

This is how our app.tsx component will look:

import { React } from "./deps.ts";

const App = () => {
  const [count, setCount] = React.useState(0);

  const garden = {
    backgroundColor: 'green',
    height: 'auto',
    fontSize: '30px',
    maxWidth: '400px',
    padding: '20px 5px',
    width: '100%'
  };

  return (
    <div className="pure-g pure-u">
      <h2>My DenoReact App</h2>
      <button className="pure-button" onClick={() => setCount(count + 1)}>Add a 🦕 in your garden!</button>
      <p style={garden}>
      { Array(count).fill(<span>🦕</span>) }
      </p>
    </div>
  );
};

export default App;

As with any standard React component, we start by importing React from our deps.ts file.

Then we declare our App component, which uses hooks to implement a simple button counter that lets you add as many dinosaurs as you want to your personal garden!

Setting up the Server

For the server I’m using Oak and the code in server.tsx will look like this:

import {
  Application,
  Context,
  React,
  ReactDOMServer,
  Router,
} from './deps.ts';

import App from "./app.tsx";

const PORT = 8008;

const app = new Application();
const jsBundle = "/main.js";

const js =
`import React from "https://jspm.dev/react@16.13.1";
 import ReactDOM from "https://jspm.dev/react-dom@16.13.1";
 const App = ${App};
 ReactDOM.hydrate(React.createElement(App), document.getElementById('app'));`;  


const html =
  `<html>
    <head>
      <link rel="stylesheet" href="https://unpkg.com/purecss@2.0.3/build/pure-min.css">
      <script type="module" src="${jsBundle}"></script>
    </head>
    <body>
      <main id="app">${ReactDOMServer.renderToString(<App />)}</main>  
    </body>
  </html>`;

const router = new Router();
router
  .get('/', (context: Context) => {
    context.response.type = 'text/html';
    context.response.body = html;
  })
  .get(jsBundle, (context: Context) => {
    context.response.type = 'application/javascript';
    context.response.body = js;
  });

app.use(router.routes());
app.use(router.allowedMethods());

console.log(`Listening on port ${PORT}...`);

await app.listen({ port: PORT });

As always, we need to import all the dependencies we will use in our server. We also import the App we created earlier; note that the .tsx extension is required in Deno, so don't forget it!
The next step is to create our Oak server application, and we'll also need to define some routes:

  • '/' will serve our HTML page that contains the rendered app.
  • '/main.js' will serve our application code that is needed to hydrate the client side React application.

Finally, we tell our application to use the routes we just created and start listening on port 8008. You may notice I'm also using router.allowedMethods(); it's a middleware that lets the client know when a route is not allowed.

Run the application

Running the SSR React application we just created is extremely simple; you just need to use the following command:

deno run --allow-net ./server.tsx

Deno is built to be secure by default, which means a Deno application cannot access your network unless we grant it permission; to do that we just need Deno's --allow-net flag.
Now the only thing left is to open http://localhost:8008/ and enjoy your new app!

Conclusion

I hope you enjoyed this brief tutorial, and I'm looking forward to seeing what happens next and how more complex applications can be built with this stack.

If you are still unclear about anything we’ve done or want a full reference of the code, here’s the GitHub repository.

A web developer’s guide to Google Cloud

Computer showing Google GSuite

At Potato we proudly use Google Cloud services to host our projects (except in cases where a client has a preference for another cloud provider) and have used App Engine since the beginning. This has had numerous benefits for us as a business, not least in the current remote working climate.

Our use of cloud services has allowed Potato’s engineers to focus on application development instead of maintaining servers. We can leave platform security and reliability in the hands of some of the best engineers in the world at Google, and at the same time take advantage of world-class availability, scalability and performance.

As with other cloud platforms, Google Cloud has a range of different services to help build and host web applications. At first glance the subtle differences between these services might not be apparent until you start using them, so I’ve put together this guide to give an overview of each service and to help you decide which is right for your use case.

We're going to look at Cloud Storage, Cloud Functions, App Engine, Cloud Run, Cloud Build and Kubernetes Engine. These services broadly come under the category of serverless architecture. I won't go into exactly what that means here; for more information check out the Serverless computing section of the Google Cloud documentation. We'll also cover containers, which are described in this Containers guide.

Static hosting with Cloud Storage

Cloud Storage is an object store used to save files in buckets for later retrieval. A typical use case might be that an App Engine app uses Cloud Storage to save and read user-uploaded files. It’s also possible to serve a static site directly from a Cloud Storage bucket with no server code using the gsutil command line tool, or the Google Cloud web interface.

This approach takes advantage of Google's CDN to serve content at incredible speed, and unlike other solutions below doesn’t suffer from cold start warm-up time. The key disadvantage is that static hosting doesn’t allow for server side programming, however Cloud Storage can easily be used in conjunction with other services (e.g. App Engine) so it’s very useful to bear in mind.
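
As a rough sketch of how that looks in practice (the bucket name and file paths here are placeholders), a static build can be published with a handful of gsutil commands:

gsutil mb gs://my-static-site                                  # create the bucket
gsutil -m cp -r ./dist/* gs://my-static-site                   # upload the built site
gsutil web set -m index.html -e 404.html gs://my-static-site   # set the index and error pages
gsutil iam ch allUsers:objectViewer gs://my-static-site        # make the objects publicly readable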

Useful for: static marketing pages; SPAs or PWAs with no back-end requirement.

Serverless functions with Cloud Functions

Cloud Functions are a lightweight alternative to running full back-end applications on App Engine or Cloud Run, and are ideal for use alongside statically served websites or in non-traditional UI contexts such as voice or chat.

Functions can be written in Go, Node.js or Python and are triggered by URL or an event such as when a file is uploaded to storage or when commits are pushed to a Git branch. Analogous to functional programming, Cloud Functions are designed to be stateless and to accomplish one task; multiple functions can be defined to break up complex or concurrent tasks. This makes functions powerful but will likely require substantial refactoring of complex apps.
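
To give a feel for the shape of this (the function name and message are made up, and this isn't a full deployment guide), an HTTP-triggered function on the Node.js runtime is little more than an Express-style request handler:

// index.js - a minimal HTTP Cloud Function on the Node.js runtime
exports.helloSpud = (req, res) => {
  // Cloud Functions passes Express-style request and response objects
  const name = req.query.name || 'world';
  res.status(200).send(`Hello, ${name}!`);
};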

It’s also worth noting that functions can scale with demand, but the first request after a period of inactivity may be slower because functions will automatically be put in a low resource state. This is known as a cold start.

Useful for: lightweight APIs written with Flask or Express.js; event-driven data processing e.g. image resizing; sending notifications with Firebase Cloud Messaging or email; chatbots and voice apps that fetch data from a third party API.

Serverless applications with App Engine

App Engine is Google’s cloud service for running sandboxed serverless apps. It comes in two flavours, standard and flexible; each has its own pros and cons but both offer powerful scaling to meet traffic demand.

Standard environment

App Engine standard powers the majority of projects at Potato. It supports applications written in Go, Java, Node.js, PHP, Ruby, and – our favourite – Python. App Engine apps make use of URL handlers defined in YAML to route traffic to different parts of the application. Because App Engine is scalable, new instances can spin up to handle increased demand in traffic.

Standard instances can also ‘scale to zero’ when there is no demand, which can be helpful in reducing billing costs. For this reason it's important to be mindful of statelessness, similar to Cloud Functions. Unlike Cloud Functions, however, App Engine is designed to accommodate complex applications, and several ‘instance classes’ are available with different memory and CPU constraints.
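
As an illustrative sketch (the runtime version and handler paths are assumptions, so check the App Engine docs for your project), a minimal app.yaml for a Python app might look like this:

# app.yaml - route static assets and application traffic
runtime: python38

handlers:
# Serve files in ./static directly as static assets
- url: /static
  static_dir: static
# Route everything else to the application
- url: /.*
  script: auto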

Useful for: traditional server-based web applications such as those built with frameworks like Symfony (PHP) and Django (Python).

Flexible environment

App Engine flex allows you to use custom runtimes as well as those found in App Engine standard. Custom runtimes use Docker which allows you to run custom server code in a sandboxed container. Many predefined container images can be found on Docker Hub and in Google Container Registry.

App Engine flex apps can scale with traffic like standard apps, but they cannot scale to zero. This means apps only need to warm up when starting new instances since there is no cold start from zero, but also means apps will incur billing costs while idle. For this reason flex is less suitable for infrequently-used or hobby projects.

Useful for: traditional server-based web applications on custom runtimes e.g. .NET, Deno (make sure to read our An Introduction to Deno post next!)

Application containers with Cloud Run

If you need more flexibility over your runtime environment than App Engine standard allows, consider using Cloud Run over App Engine flexible. Cloud Run is a recent addition to the Google Cloud family which runs containers natively on Kubernetes Engine rather than in a Compute Engine virtual machine like App Engine.

Apart from faster warm up speed compared to App Engine flex, Cloud Run also has App Engine standard’s ability to scale the number of active instances to zero when not in use. This can be helpful in reducing billing costs.
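
As a sketch of the workflow (project, service and region names are placeholders), deploying to Cloud Run is typically a build step followed by a deploy step:

# Build the container image with Cloud Build, then deploy it to Cloud Run
gcloud builds submit --tag gcr.io/my-project/my-service
gcloud run deploy my-service \
  --image gcr.io/my-project/my-service \
  --platform managed \
  --region europe-west1 \
  --allow-unauthenticated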

Useful for: traditional server-based web applications on any runtime; Symfony (PHP), Django (Python), .NET, Deno.

CI/CD containers with Cloud Build

Cloud Build is not a service that can respond directly to HTTP requests but deserves an honourable mention because of how it fits into the Google Cloud serverless family. In many ways it’s a continuation of the event driven invocation model seen in Cloud Functions; builds are triggered by a linked Git repository (either a Cloud Source Repository, Bitbucket or GitHub).

Builds follow a step-by-step list of procedures that run in containers. This allows you to automatically run CI/CD tasks such as compilation, testing, and deployment to App Engine or Cloud Run, all by pushing commits or tags to a Git branch.
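
A sketch of what that configuration can look like (the steps below are illustrative; a real pipeline would match your project's tooling, and the build service account needs deploy permissions):

# cloudbuild.yaml - run the tests, then deploy to App Engine
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
- name: 'gcr.io/cloud-builders/npm'
  args: ['test']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']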

Useful for: unit testing; end-to-end testing; compiling assets; deploying to App Engine, Cloud Run or Kubernetes Engine.

Container clusters with Kubernetes Engine

The last service I want to talk about is Kubernetes Engine which is used for orchestrating multiple containers. As mentioned above, Cloud Run itself runs on Kubernetes Engine.

If you find that your application architecture has become sufficiently complex that you need more control over multiple microservice containers, or dynamic load balancing between different versions of your app, then this may be the solution for you. The obvious drawback is that you need to configure this architecture yourself and thus it requires more specialist DevOps knowledge. Also bear in mind that it’s possible to run multiple instances of Cloud Run or App Engine in the same project and this covers most complex use cases.

Useful for: complex application architectures; multi/microservice apps; orchestrating multiple runtimes as a single managed service.

Summary

The service that's right for you really depends on the complexity of your application.

  • If your application has a dynamic back-end and is written in Go, Java, Node.js, PHP, Python or Ruby, use App Engine standard.
  • If your application has a dynamic back-end but is written in a language or version unsupported by App Engine use Cloud Run.
  • If your application has a complex back-end which uses multiple runtimes or a microservice architecture consider multiple Cloud Run services or Kubernetes Engine.
  • If your application has no dynamic back-end consider using App Engine standard to serve static files or a dedicated static hosting service such as Firebase Hosting or GitHub Pages.
  • If your application doesn’t need a web front-end and only requires minimal stateless processing use Cloud Functions.

An Introduction to Deno

Since its introduction in 2009, Node.js has gained huge popularity and usage. But with that, issues with its ecosystem, feature adoption and dependency bloat have started to surface.

So, in true JavaScript community style, there's a new kid on the block: Deno 🦕

What is Deno?

Deno is a new runtime for JavaScript and TypeScript, built on Google's V8 engine and written in Rust. It was started by Ryan Dahl (who famously started Node.js) as an answer to the problems he saw with Node.js and its ecosystem.

Ryan announced the project a couple of years ago at JSConf EU during a talk in which he went into some detail about regrets he had over Node.js, particularly around decisions he did (or didn't) make along the way. It's definitely worth a watch.

Although seen as a Node.js successor, there are some major differences between the two:

  • Deno has no package manager.
  • Deno implements a security sandbox via permissions.
  • Deno has a standard library for common tasks.
  • Deno has first-class TypeScript support.
  • Deno will be able to be compiled into a single executable.

No package manager

Instead of the complex module resolution that Node.js supports, Deno simply uses URLs for dependencies and doesn't support package.json. Import a relative or absolute URL into your project, and it'll be cached for future runs:

import { listenAndServe } from "https://deno.land/std/http/server.ts";

Third party modules can be added to Deno's website via https://deno.land/x/.

Security

By default, a Deno application will not be able to access things like your network, environment or file system. Unlike Node.js, in order to give an application access to this sandboxed functionality you'll need to use one of the provided flags:

$ deno run --allow-write server.ts

You can see all of Deno's supported security flags by running deno run --help.

Standard library

Much like Go, the Deno team maintains a core, stable set of utilities in the form of a standard library. These cover utilities such as logging, http serving and more. If you need to implement a feature, it's probably best to check the standard library first to see if it's already supported.

You can see what's available in Deno's standard library via its source code.
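
For example, the logging utilities can be imported straight from a URL like any other module (the exact API may change between std releases, so treat this as a sketch):

import * as log from "https://deno.land/std/log/mod.ts";

log.info("Server starting...");
log.warning("Something looks odd, but we can carry on.");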

TypeScript

Unlike Node.js, Deno has first-class support for TypeScript (most of its standard library is written in it). This means that ES modules and all the goodness of static typing are available right from the start, with no transpilation required on the user side. It's worth noting however that Deno still needs to compile TypeScript to JavaScript behind the scenes, and as such incurs a performance hit at compile time unless the module's already been compiled and cached.

If you'd rather not use TypeScript, Deno supports JavaScript files too.

Single executables

Although not implemented yet, one future ambition is to allow a Deno application to be compiled down into a single executable. This could vastly improve and simplify the distribution of JavaScript-based applications and their dependencies.

You can track the progress of single executable compilation on GitHub.

Running Deno

Now we know what Deno is, let's have a play with it.

The Deno website provides plenty of installation options, but since I'm using macOS I'll use Homebrew:

$ brew install deno

Once installed, deno should be available to use from your terminal. Run deno --help to verify the installation and see what commands it provides.

Deno also gives the ability to run applications with just a single source URL. Try running the following:

$ deno run https://deno.land/std/examples/welcome.ts

Download https://deno.land/std/examples/welcome.ts
Warning Implicitly using master branch https://deno.land/std/examples/welcome.ts
Compile https://deno.land/std/examples/welcome.ts
Welcome to Deno 🦕

Deno downloads the module from the provided URL, compiles it and runs the application. If you visit the above module's URL in your browser, you'll notice that Deno also provides a nice browser UI for the module's source code, which in this case is a simple console.log statement.

Of course running arbitrary third party code like this should always be treated with caution, but since it's an official Deno example we're all good here, and as mentioned above, Deno's security flags should help limit any potential damage.

You'll also notice that if you run the same command again, the welcome.ts module isn't redownloaded. This is because Deno caches modules when they're first requested, allowing you to continue work on your project in places with limited internet access.

If for any reason you want to reload any of your imports, you can force this by using the --reload flag:

$ deno run --reload https://deno.land/std/examples/welcome.ts

Building your first Deno app

To demonstrate a few of Deno's features, let's dive into a simple API example. Nothing too complicated, just a couple of endpoints. And in true Potato style, we'll use different types of spuds for our test data.

It's worth noting beforehand that this demo won't rely on any third party modules, and will use an in-memory data store. There are plenty of libraries (some are linked at the bottom of this article) that aim to make this simpler, but for now let's stick with vanilla Deno!

Setting up the server

Firstly, let's create a TypeScript file. Don't worry too much if you're not familiar with TypeScript, you can use plain JavaScript too. I'll create mine at server.ts.

Next, we need to set up a simple web server. As we've already seen, Deno has a standard library that contains some useful functions with one of these being the http module. Taking inspiration from Go, there's a helpful listenAndServe function that we can use:

import {
  listenAndServe,
  ServerRequest,
} from "https://deno.land/std/http/server.ts";

listenAndServe({ port: 8080 }, async (req: ServerRequest) => {
  req.respond({ status: 204 });
});

console.log("Listening on port 8080.");

What's happening here? Firstly, we import the listenAndServe method from Deno's http module, and the ServerRequest interface to allow TypeScript type checking. Then, we create a simple server that listens on port 8080 and responds to all requests with an HTTP 204 No Content response.

As mentioned above, by default Deno will prevent our application from accessing the network. To run this successfully, we'll need to use Deno's --allow-net flag:

$ deno run --allow-net server.ts

We can verify our application is running correctly using cURL in another terminal tab:

$ curl -i -X GET http://localhost:8080

HTTP/1.1 204 No Content
content-length: 0

Environment variables

To show how environment variables are passed to Deno, let's add support for a dynamic port number, since this is a common requirement for production servers. Deno provides the Deno.env runtime API to help with retrieving the current environment variables:

import {
  listenAndServe,
  ServerRequest,
} from "https://deno.land/std/http/server.ts";

const { PORT = "8080" } = Deno.env.toObject();

listenAndServe({ port: parseInt(PORT, 10) }, async (req: ServerRequest) => {
  req.respond({ status: 204 });
});

console.log(`Listening on port ${PORT}.`);

We can now pass a custom port to our application when running it. One thing to note here is that we need to convert the port variable to a number, since all environment variables are passed as strings and listenAndServe expects a number for the port.

When running this, we'll also need to use the --allow-env flag to grant the application access to our environment variables:

$ PORT=6060 deno run --allow-net --allow-env server.ts

Routes

For the sake of simplicity, we'll implement a very simple router ourselves using a good old fashioned switch statement.

Firstly, let's create some empty route handlers. We'll create two: one to allow a new spud type to be added to a list, and another for retrieving the current list. For now, let's return an HTTP 204 No Content response so that we can test our application along the way:

const createSpud = async (req: ServerRequest) => {
  req.respond({ status: 204 });
};

const getSpuds = (req: ServerRequest) => {
  req.respond({ status: 204 });
};

Next, let's create a handleRoutes method that'll act as our router:

const handleRoutes = (req: ServerRequest) => {
  if (req.url === "/spuds") {
    switch (req.method) {
      case "POST":
        createSpud(req);
        return;
      case "GET":
        getSpuds(req);
        return;
    }
  }

  req.respond({ status: 404 });
};

Here, we're checking every incoming request's URL and method and directing the request to the appropriate function. If the URL or method doesn't match anything we expect, we return an HTTP 404 Not Found to the user.

Finally, let's call the handleRoutes function from our original server and add a try statement around it to catch any errors and return an appropriate response:

listenAndServe({ port: parseInt(PORT, 10) }, async (req: ServerRequest) => {
  try {
    handleRoutes(req);
  } catch (error) {
    console.log(error);
    req.respond({ status: 500 });
  }
});

Using a try statement and catching errors in this way is usually a good idea with Deno, since unlike Node.js a Deno application will exit when it encounters an uncaught error.

We should now be able to send POST and GET requests to http://localhost:8080/spuds and get an expected HTTP response:

$ curl -i -X GET http://localhost:8080

HTTP/1.1 404 Not Found
content-length: 0

$ curl -i -X GET http://localhost:8080/spuds

HTTP/1.1 204 No Content
content-length: 0

$ curl -i -X POST http://localhost:8080/spuds

HTTP/1.1 204 No Content
content-length: 0

Create handler

Next, let's add an in-memory store for our spud types:

const spuds: Array<string> = [];

In order to process the incoming spud data, we'll need to be able to parse the request's JSON body. Deno doesn't have a built in way of doing this at the time of writing, so we'll use its TextDecoder class and parse the JSON ourselves:

const createSpud = async (req: ServerRequest) => {
  const decoder = new TextDecoder();
  const bodyContents = await Deno.readAll(req.body);
  const body = JSON.parse(decoder.decode(bodyContents));
};

What's happening here? Essentially, we're first using the Deno.readAll method to asynchronously read the contents of the request body (a Reader) as bytes. We then decode that into a UTF-8 string, and finally parse it as JSON. Phew.

We can then proceed to add the spud type to the store we created earlier, and return a HTTP 201 Created response. Our final create handler should look something like this:

const createSpud = async (req: ServerRequest) => {
  const decoder = new TextDecoder();
  const bodyContents = await Deno.readAll(req.body);
  const body = JSON.parse(decoder.decode(bodyContents));

  spuds.push(body.type);

  req.respond({
    status: 201,
  });
};

Get handler

To implement our GET handler, we'll essentially reverse the operation we wrote above by using Deno's TextEncoder. We'll then set the relevant header to "application/json" using Deno's Headers class and return the spud data with a HTTP 200 OK response:

const getSpuds = (req: ServerRequest) => {
  const encoder = new TextEncoder();
  const body = encoder.encode(JSON.stringify({ spuds }));

  req.respond({
    body,
    headers: new Headers({
      "content-type": "application/json",
    }),
    status: 200,
  });
};

Final application

Our final file should look a bit like this:

import {
  listenAndServe,
  ServerRequest,
} from "https://deno.land/std/http/server.ts";

const { PORT = "8080" } = Deno.env.toObject();

const spuds: Array<string> = [];

const createSpud = async (req: ServerRequest) => {
  const decoder = new TextDecoder();
  const bodyContents = await Deno.readAll(req.body);
  const body = JSON.parse(decoder.decode(bodyContents));

  spuds.push(body.type);

  req.respond({
    status: 201,
  });
};

const getSpuds = (req: ServerRequest) => {
  const encoder = new TextEncoder();
  const body = encoder.encode(JSON.stringify({ spuds }));

  req.respond({
    body,
    headers: new Headers({
      "content-type": "application/json",
    }),
    status: 200,
  });
};

const handleRoutes = (req: ServerRequest) => {
  if (req.url === "/spuds") {
    switch (req.method) {
      case "POST":
        createSpud(req);
        return;
      case "GET":
        getSpuds(req);
        return;
    }
  }

  req.respond({ status: 404 });
};

listenAndServe({ port: parseInt(PORT, 10) }, async (req: ServerRequest) => {
  try {
    handleRoutes(req);
  } catch (error) {
    console.log(error);
    req.respond({ status: 500 });
  }
});

console.log(`Listening on port ${PORT}.`);

Let’s give this a test:

$ curl -i --data '{"type": "maris piper"}' -X POST http://localhost:8080/spuds            

HTTP/1.1 201 Created
content-length: 0

$ curl -i --data '{"type": "king edward"}' -X POST http://localhost:8080/spuds            

HTTP/1.1 201 Created
content-length: 0

$ curl -i -X GET http://localhost:8080/spuds                            

HTTP/1.1 200 OK
content-length: 54
content-type: application/json
{"spuds":["maris piper", "king edward"]}

If you'd rather, you can view this file as a Gist or run it directly with the following command:

$ deno run --allow-net --allow-env https://gist.githubusercontent.com/dcgauld/205218530e8befe4dfc20ade54e7cc84/raw/9eff7733cf017f33b2bf3144937f97702ae4fc63/server.ts

We just created our first Deno application!

Conclusion

Hopefully this article has given you a glimpse into the world of Deno, and some inspiration to start using it for future projects. I'm excited to see what the future holds for the project, especially around things like single file executables and the potential to run certain Deno modules in the browser.

If you'd like to learn more about it and its features, I'd really recommend giving the Deno manual a read.

Useful links

We created our first Deno API with no third party modules, but there are many libraries out there already that aim to simplify that process. Some examples:

How Potato Code Remotely

At the time of writing, we’re in the middle of the COVID-19 lockdown. As with most other companies, the lockdown has required us at Potato to rapidly change the way we work day-to-day. We’re a tech company, so that has given us a big advantage in this rapid and unsettling change of routine. But as a company, we’ve historically encouraged our engineers to travel into our central London office most of the time. Working from home has always been a perk at Potato, but never the default position.

Fortunately we’ve been unintentionally preparing for this day by making the right choice of tools. In this post I hope to give an overview of our toolset, and how it’s allowed us to continue to operate as normal.

Cloud-First

Right from the beginning of Potato, we've always pushed to use tools that are hosted by others. We've never wanted to spend our time being sysadmins, and we've always resisted the urge to host anything inside our offices. We wanted all of our stuff to be available anywhere, from any platform. In the past the option of running a VPN and hosting software internally has come up, but we've steadfastly refused the temptation: the risk of losing access, the burden of maintaining it, and the danger of a single point of failure (what if the office burned down?!) were all reasons to avoid it.

Also, we wanted to make sure that any of our Engineers could deploy any of our projects from anywhere with an Internet connection. In the past we’ve deployed from trains, planes, hotels, and on one occasion a long while back - a locked bathroom (long story!).

Finally, if anyone ever had their laptop stolen out of the back of a van while climbing (true story), nothing would be lost, because everything is saved online.

For this reason (along with our Google history), we've based all of our internal systems on Google's GSuite. Potato runs on Google Docs, Sheets, Calendar and of course Gmail. This has given us access to everything from anywhere, and the decision to base everything around GSuite has allowed us to continue working seamlessly despite now all being remote.

Slack / Google Meet

For communication, we’ve been long-time users of Slack, (although before this we were pretty sold on Hipchat). As it’s cloud-based we’ve just continued messaging as normal without disruption (though probably making a bit more use of threads to keep things organised).

For video calls we’ve always used Google Meet (previously Hangouts). The number of ad-hoc catch-ups has increased, and we hold daily drop-in sessions for people who just want a bit of socialising. On Fridays we’ve rebooted our traditional TPIF (Thank Potato It’s Friday) into a virtual mass video call, armed with beverages of choice and reams of friendly banter!

GitLab

A couple of years ago we moved all of our development over to GitLab. The intention was to bring all of our end-to-end product development life-cycle under one tool. Although we had the option of hosting GitLab ourselves, using GitLab.com’s hosting has saved us the overhead of managing it, and of course it means that our project management tools are available from anywhere.

Now that we’re all a bit more involved in using GitLab directly day-to-day, we’ve also implemented a Chrome/Firefox extension that makes things a little easier.

Visual Studio Code - Live Share

Code editing with Visual Studio Code

The ability to grab another developer to help solve a problem has always been a key part of Potato culture, and historically one that required that other dev to be in the same room. When we were forced into remote working this was one area where we needed to find a new tool to help us, and we did so by using Visual Studio Code’s “Live Share” plugin. This plugin lets you host a programming session that you can invite others to collaborate in (think Google Docs, but for code editing). Combined with a Google Meet call it makes for a great, and simple pair-programming experience.

Figma

The final tool in our toolbox is Figma. Until recently our designers were heavily invested in using Sketch as their main design tool, but we’d noticed issues even ahead of the lockdown that made this problematic.

Firstly, it broke the cloud-first rule. Sketch ran locally, saved files locally, and it wasn’t cross-platform which caused a load of headaches for our Linux-using developers. We came up with workarounds for these things (e.g. using the “Measure” plugin for detailed exports, teaching the designers how to use Git etc.) but it never really fitted the culture of Potato.

Then Figma came along, and not only did the designers find that it was a better tool, but it’s based in the cloud and so is platform-agnostic.

Honourable Mentions

In addition to our development toolset, we use a number of other web-based tools day-to-day, including:

  • Harvest + Forecast - for time tracking and project scheduling
  • Bamboo - for managing our people
  • Trello - for workshopping and lightweight task tracking
  • TrakStar - for managing the career progression of our people

Summary

There we are: a complete toolset for seamless remote working, built up almost accidentally by following the rule that our stuff should be accessible from anywhere, and that we don't want to be the ones running it!

Parcel and Rust: A WASM Romcom

WebAssembly (WASM) and Rust have been growing and evolving over the last couple of years, so what's it like to use them together?

Introduction

I've wanted to use Rust and WASM for a while for a number of reasons: small bundle size, low-level access with reliable performance, and all the perks that come with Rust (strong type safety, zero-cost abstractions, etc.). So, when I was presented with the opportunity of two weeks of off-project learning, there was no excuse not to give Rust and WASM a try! What followed over the next two weeks or so was a bit of a programming rollercoaster, something all programmers have been through many times. But when it came time to write my experiences up for this article, I noticed a pattern: this experience wasn't just any rollercoaster… it mapped perfectly onto the structure of a romcom! So, to explain and analyse this not-officially-supported pairing of a web application bundler and a systems programming language, we will follow the standard 10-part romcom format, for structure and a bit of comedic relief.

Part 1: “An Unfulfilled Desire”

Another reason I wanted to use Rust and WASM is that they are new and shiny, plus it would be nice to hook the Rust program up to a nice web interface. One problem: Rust and WASM are only officially supported with Webpack as a bundler. To me, Webpack was that ex in a romcom; it was never a good relationship and we could never make it work. But it seemed it might be a necessary evil to reach my goal of making something using my lost love, Rust.

Part 2: “Meet cute”

So, I grudgingly start to clone the Rust WASM Webpack template as I'm flashed back to a previous project, watching myself battle with Webpack trying to compile a Single Page App (SPA), the list of dependencies growing with every plugin. I spam Ctrl + C. "No, there must be something else," I think. And that's when it hits me: "Parcel! I remember it saying something about WASM?" I quickly navigate to the Parcel website, and there it is, exactly what I have been looking for. After a quick npm install, I'm head over heels.

Part 3: “Happy Together”

One npm init and npm install -D parcel-bundler later and we are off to the races. Parcel supports importing .rs files in JS and TS out of the box, so in a simple HTML5 boilerplate with a main.js I do just that. The Rust file contains a simple function which, when given two numbers, returns their sum; some extra Rust tells the compiler not to mangle the function name, and it's done! The JS calls this function and displays the output in the DOM. A simple example, but this seems to have everything I need…
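
For reference, the JS side of this honeymoon phase looked roughly like the example in the Parcel docs (the file and function names here are mine):

// main.js - Parcel compiles the imported .rs file to WASM behind the scenes
import { add } from './add.rs';

// add is the #[no_mangle] Rust function described above: two numbers in, their sum out
document.body.innerText = `1 + 2 = ${add(1, 2)}`;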

Part 4: “Obstacles Arise”

But, as with most romcoms, a bump in the road pulls the relationship into question. For Rust and Parcel, that issue was returning or accepting strings in functions. No matter what I did, it wouldn't work, and streams of undefined plagued my console. There seemed to be a solution, though: the well-supported wasm_bindgen package provides a polyfill for the things missing between Rust and JS! So, make a Rust project with a Cargo.toml, add the wasm_bindgen crate, import and run… oh wait. Parcel doesn't seem to work with wasm_bindgen, even with a plugin someone on a GitHub issue cites as the solution… what now?

Part 5: “The Journey”

After a few minutes of frantic Googling and skim-reading GitHub issues and various blog tutorials, I find an alternative package: stdweb. It seems to have most of the functionality of wasm_bindgen, and there's a handy tutorial on how to set it up with Parcel! A quick switcheroo of the packages in the Cargo.toml, some slight code tweaks, and we are back on course with strings being returned and received in this simple app. Time to start making something slightly more complex… a simple genetic algorithm simulator!

Part 6: “New Obstacles”

Okay, so: new project, Parcel installed, Rust module created, stdweb installed, let's get this show on the road! In my head the idea is simple: make a struct in Rust to represent the genetic algorithm simulation, add some methods to it to get the population or simulate a generation, and then simply instantiate and use it in JS. Can't be too hard, surely (foreshadowing)! Let's just make the struct… seems to be instantiating in JS… let's add some methods onto the struct… wait… what? It seems exporting structs is temperamental at best when using stdweb and Parcel. Am I back to square one already?

Part 7: “The Choice”

All seems lost. I'm out of viable Rust packages to try and have a console littered with errors; is there nothing I can do? In a last-ditch effort I try manually compiling the .wasm file myself and importing it, but after five edits to the Rust file I can already feel this getting tedious… As I crawl through GitHub issue after GitHub issue, Webpack comes up again and again as the solution. Maybe I need to accept defeat and go back.

Part 8: “Crisis”

F@$% I’m going to have to use Webpack, I think as I go back to the start and open the Webpack Rust template, defeated.

Part 9: “Epiphany”

As the Webpack Rust template repo clones, I take to Google one last time, using one of my old searches, hoping for a miracle. Wait, what's this? A GitHub issue about Parcel and wasm_bindgen that wasn't there before? The Google search index must have only just found it relevant? Hold on, someone has linked a template for Rust, wasm_bindgen and Parcel! Thank the Search Engine Gods, the project may be saved!

Part 10: “Resolution”

There it was, under my nose the whole time on the rustwasm GitHub repository. I quickly cloned the repo and followed the set-up instructions, and it all worked flawlessly. In the end this search had become a real Cinderella story, with the perfect match found on the stroke of midnight. So now, time to make something cool with it!

I didn't want to focus too much on the front end and slave over SCSS making it look nice, so I turned to an old friend: Tailwind CSS, a utility-first CSS framework which I have set up with PostCSS and Parcel before. With all that done I build out a simple layout with a side panel for configuring the simulation and a main panel to hold the results. After deciding on the look and feel of the page I start to make some TypeScript components for controlling and displaying the simulation.

Finally, after getting the site up and running with some mock data from a simple setInterval, I start to hook it up to the WASM. It ends up being remarkably simple: just import the module object from the Rust project's Cargo.toml and then all the structs and functions are attached to it! A few little tweaks and some testing and, what do you know, it's all working and converging! A little bit of cleanup, and then I deploy it on Firebase and it's hosted happily ever after.

Conclusion

Writing this article in this style has been a bit of fun, and the project is one I genuinely enjoyed every minute of, every up and down included. But what is it actually like using Rust and Parcel? I can wholeheartedly say it is a true pleasure… once you find the right resources. Parcel just makes it so easy, with no configuration needed for most projects, and although it might not be as fast on larger projects it will give the big dogs a run for their money nine times out of ten! As for Rust and WASM, it was everything I expected and more. Rust has always been a language I have loved programming in and, although it's a challenge, it never gets old. If I had to complain about one thing, it would be the lack of IntelliSense on the exported JS module. It may not be an issue when you write the tiny Rust file being compiled, but I can see it being painful on larger projects using Rust, WASM and Parcel. In conclusion, if you have ever had a little voice telling you to give Rust or WASM a go, I would highly recommend it, and maybe consider using Parcel to avoid the emotional rollercoaster I went on to get it done!

GitLab Details Sidebar

Why and how we created the GitLab Details Sidebar extension for both Chrome and Firefox.

The issue at hand

Like many companies, at Potato we use GitLab as part of our everyday product development workflow. It’s feature-rich, open source and the team are great at taking on feedback and feature requests from the community.

It seemed, however, that one area of GitLab suited the engineer mindset much more than the product mindset. Our Product Leads had a persistent issue with GitLab boards: they were unable to see or edit issue descriptions and comments directly from the board view. This was something that those who were previously familiar with tools such as JIRA particularly struggled with.

We did a bit of searching around and found that this issue had already been raised. It wasn’t in a state where merge requests were being accepted, or the final direction had been decided. So we set about seeing what we could do to address the issue ourselves in as lean a way as possible. A design had already been put together on the issue in question, so we had everything we needed to jump straight into dev tools and start tinkering.

Mockup used to build the GitLab Details Sidebar

Rapid prototyping

After exploring a couple of different approaches, it became clear that the fastest way to get a feature-rich details view would be to use the details page itself, leveraging an iframe to include it in the existing sidebar. As well as being simple to execute, this would ensure that any updates to the functionality of the details page would be reflected in our extension. Then it was a case of adding a click listener to detect when an issue was clicked, and reading the URL of its link to update the source of the iframe.
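
A stripped-down sketch of that idea looks something like this (the selectors and the frame id are illustrative, not the extension's actual ones):

// content script sketch: reuse the issue details page inside the board sidebar
const sidebarFrame = document.createElement("iframe");
sidebarFrame.id = "details-sidebar-frame"; // hypothetical id
document.body.appendChild(sidebarFrame);

// When a board card is clicked, point the iframe at that issue's details page
document.addEventListener("click", (event) => {
  const card = event.target.closest(".board-card"); // selector is illustrative
  if (!card) return;
  const link = card.querySelector("a[href*='/issues/']");
  if (link) {
    sidebarFrame.src = link.href;
  }
});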

The next thing to validate was how to inject the iframe into the page of everyone who wanted it. A Chrome Extension seemed like the obvious candidate here, as most of us use Google Chrome at Potato, and extensions can be easily ported over to Firefox to extend the reach (and also to please our Tech Director, Luke). I hadn't personally built a Chrome Extension before, but a quick bit of research showed that a feature called Content Scripts would give us a simple way to inject CSS and JS files onto pages that match our URL pattern.
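
In extension terms that's just a content_scripts entry in the manifest; a rough sketch (manifest v2 style, with placeholder file names) looks like this:

{
  "manifest_version": 2,
  "name": "GitLab Details Sidebar",
  "version": "1.0.0",
  "content_scripts": [
    {
      "matches": ["https://gitlab.com/*"],
      "js": ["content.js"],
      "css": ["sidebar.css"]
    }
  ]
}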

Defining the MVP

Once we'd proved to ourselves that this was possible, and we had a sensible direction in place, we wanted to define the minimum set of requirements that would make this extension useful. The goal was to get it to real users at Potato and see if we could improve the workflow for our Product Leads. After a quick brainstorming session, the set we came up with was:

  • Ability to see and interact with the issue details view directly from the board
  • Option to toggle the expanded details view
  • Expanded state should persist between tickets and on reload
  • Users should still be able to interact with the full board in the background
  • Expanded view is not appropriate for small screens
  • Details view should have a stripped back UI to integrate better into the sidebar

Polishing up the MVP

We had a few things to do to get from the proof of concept that we were copy-pasting into the developer console to our first release candidate, some of which proved easier than others. Saving the toggled state was simple enough. The toggle relied on flipping a boolean in the code and updating some CSS classes accordingly. localStorage seemed like a good candidate for the implementation as the extension is installed at a browser level, just like the scope of localStorage. All that needed to be done to persist the state was writing the new state every time it was changed. Then on initialisation, read this value back from localStorage (converting between string and boolean).
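
A minimal sketch of that persistence logic (the storage key and class name are made up for illustration):

const STORAGE_KEY = "gitlab-details-sidebar-expanded"; // hypothetical key

// Read the saved state on initialisation; localStorage only stores strings
let expanded = localStorage.getItem(STORAGE_KEY) === "true";

function toggleExpanded() {
  expanded = !expanded;
  localStorage.setItem(STORAGE_KEY, String(expanded));
  // Update the CSS classes so the sidebar reflects the new state
  document.body.classList.toggle("details-sidebar-expanded", expanded);
}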

One thing that proved more challenging was correctly handling the creation of new issues on the board. Using a MutationObserver seemed like a good fit for this, and it was a good excuse for me to try it out on a more complicated use case. Initially it seemed really simple to filter the mutations down to the creation of new cards, but there were a few edge cases. Dragging and dropping created clones of each card, but this could be fixed by tracking which issue numbers had been seen before. The final hurdle was handling start-up, when all of the issues are added to the board asynchronously. We didn't want to treat each card added at that point the same way as a new issue being created. As the cards are added all at once, a simple debouncing of the function handling new cards reduced the occurrences down to a single firing of the event that could be safely ignored until user interaction.
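
To make that concrete, here's a rough sketch of the observer with a debounce (the selectors, data attribute and timing are illustrative rather than the extension's real values):

const seenIssues = new Set(); // issue numbers we've already handled
let pendingCards = [];
let debounceTimer;

function processPendingCards() {
  pendingCards.forEach((card) => {
    const issueNumber = card.dataset.issueId; // attribute name is illustrative
    if (issueNumber && !seenIssues.has(issueNumber)) {
      seenIssues.add(issueNumber);
      // ...attach the sidebar click handling for the newly created card here
    }
  });
  pendingCards = [];
}

const observer = new MutationObserver((mutations) => {
  mutations.forEach((mutation) => {
    mutation.addedNodes.forEach((node) => {
      if (node.nodeType === Node.ELEMENT_NODE && node.matches(".board-card")) {
        pendingCards.push(node);
      }
    });
  });
  // Debounce: the burst of cards added on start-up collapses into a single call
  clearTimeout(debounceTimer);
  debounceTimer = setTimeout(processPendingCards, 200);
});

observer.observe(document.body, { childList: true, subtree: true });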

Finally, to get things ready for internal testing, we needed to create a store page, give our extension a name and capture a screenshot of the extension in action.

GitLab Details Sidebar on the Chrome Web Store

Internal dogfooding & finishing touches

The newly named GitLab Details Sidebar was very well received across the teams at Potato. People across the disciplines found it a useful addition to their workflow and it swiftly became the most installed internal Chrome Extension at Potato. From this use we discovered and fixed a few bugs, and got some good feedback on some potential future enhancements.

As well as offering some bug fixes, one of my colleagues made a great contribution to speed up the workflow: using GitLab's CI pipeline to automate the release process, uploading a zip file directly to the Chrome Web Store and then publishing the extension via the API. Now anyone in the team can release a successful commit from master without having to build anything locally.

Publish to the world

Finally, we were ready to share our creation. We added a simple popup window to show some info about the extension and link back to the issue board and the source code, then changed the extension from private to public and republished it.

GitLab Details Sidebar info window

You can find the GitLab Details Sidebar on the Chrome Web Store and on the Add-ons for Firefox site where you can install it completely free. We fully welcome any feedback or feature requests. Hopefully this extension will improve your daily workflow just as much as it has ours.