Building SaaS from scratch


What follows is a light-touch overview of building a multi-tenant SaaS product as the CTO of Altroleum, a start-up working on data tools for the energy transition. The process took several months, from buying the domain through to shipping a stable beta, and involved several steep learning curves.

Altroleum is a single-page application built with modern best practices in mind. It needed to be secure, fast, responsive, extensible, cross-platform, scalable and, probably most importantly, not unnecessarily difficult to build.

The first decision was to go with a RESTful API consumed primarily by a JavaScript front-end. To keep things simple at first, a single API would provide all functionality, and a single front-end would consume it.

The next step was to decide what frameworks we’d use to implement the front-end and back-end and how they’d be deployed.

Back-end

Since this project revolves around data, with machine learning an important component, it made sense to me to use Python. This means a busy developer (me) doesn’t have to switch languages when jumping from working on a TensorFlow model to tinkering with the API.

Django is an obvious web framework choice for Python, and DRF (Django REST Framework) is a great option for building RESTful APIs. It’s extremely powerful, plenty flexible, and has a great developer community. (Note that when I refer to our ‘REST API’, I’m cheating and not abiding by HATEOAS, because life is too short and my front-end is not sentient.) Flask-RESTful is another good option, but I wanted to take advantage of Django’s ‘batteries included’ approach to speed things along.

AWS was kind enough to give us a few credits to get us off the ground and, as a lean start-up, it made sense to use them. It also helps that AWS is essentially infinitely scalable and has an offering for almost any cloud technology you can think of.

Our API is deployed using Elastic Beanstalk and we use Postgres databases on RDS. Elastic Beanstalk gets a bit of stick for being ‘dead’, but it doesn’t cost anything on top of the services you actually use and takes care of a lot of the stuff I don’t want to waste time on. I can still go and mess around with the EC2 instances and load balancers directly, but I can also set up rolling deployment in a few clicks and choose my proxy server from a dropdown menu. RDS is really easy to configure and does exactly what it’s meant to do, so no complaints there.
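When RDS is provisioned through the Elastic Beanstalk environment, the connection details are injected as `RDS_*` environment variables, which Django’s settings can read directly. The sketch below is illustrative rather than our actual settings module; the function name is hypothetical, and you should confirm the variable names against your own environment’s configuration.

```python
import os


def database_config(env=os.environ):
    """Build a Django-style DATABASES setting from the RDS_* environment
    variables that Elastic Beanstalk injects when an RDS instance is
    attached to the environment. Double-check the variable names in your
    environment's configuration console."""
    return {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": env["RDS_DB_NAME"],
            "USER": env["RDS_USERNAME"],
            "PASSWORD": env["RDS_PASSWORD"],
            "HOST": env["RDS_HOSTNAME"],
            "PORT": env["RDS_PORT"],
        }
    }
```

In `settings.py` this would be `DATABASES = database_config()`, which keeps credentials out of source control.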

Front-end

I’m far more comfortable working on the back-end than the front-end so the priority here was to keep things easy whilst not compromising on building a great SPA. It was really a choice between React and Vue.js.

I ended up going for Vue.js. I had never used React but had a tiny bit of exposure to Vue. Additionally, I had some experience with simple HTML and CSS in a web 1.0 kind of way so Vue’s HTML templating felt more natural. Vue also seems more complete since it doesn’t rely on third-party packages for state management and routing.


The application front-end

At the time of writing, React is more widespread and seems to be more fashionable. It also appears there’s a greater supply of React devs. But I feel that Vue is gaining momentum, and I love its simplicity, ease of use, and separation of concerns, even if you have to accept a greater degree of ‘magic’.

In the end, React seemed a bit too much like overkill and I didn’t want to learn a completely new framework for its own sake. In any case, if future hires can make a great case for re-writing the front-end in React, there’s nothing stopping us!

Once I’d settled on Vue, Vue CLI 4 was a game-changer. It took a lot of the pain out of project scaffolding, prototyping, and building. Combined with Vuetify, we were quickly deploying decent-looking SPAs.

The front-end is packaged up into static files and deployed on S3, distributed through CloudFront. I found this set-up really straightforward to configure and I’m really happy with its simplicity.

This part can’t be neatly divided into a front-end SPA and a back-end API, so it is necessarily a bit less organised. I’ll try to group things as logically as possible.

What makes it SaaS?

To me, the term SaaS is mostly just useful in communicating roughly how you plan to do business. Technically, it appears to simply mean that your software is centrally hosted. However, some conventions have emerged.

You may have noticed that many SaaS businesses have a similar landing page, optionally offer an API, have three subscription tiers, and take a very persistent approach to sales.

Underneath these conventions, though, I believe you can call almost any centrally hosted software with a coherent approach to tenancy ‘SaaS’. Let’s assume that I’m not wrong for the sake of this discussion.

We’re centrally hosted, so how do we handle tenancy? Altroleum’s core is multi-tenant where multi-tenancy makes sense, but single-tenant where security and customisation are key. For easier maintenance and better scaling, Altroleum’s core platform is hosted on shared servers, and account data and persistent user data are stored in shared schemas. Deployed machine learning models and any data relating to them, whilst accessible through the platform, are hosted entirely separately, with their own servers and databases. Data added for viewing and sharing on the platform is also necessarily stored entirely separately.
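The shared-schema part of the design boils down to one invariant: every row carries its tenant’s identifier, and every read is filtered by it before anything else happens. A minimal, pure-Python sketch of that idea (the `Record` type and field names here are hypothetical, not our actual models):

```python
from dataclasses import dataclass


@dataclass
class Record:
    """A row in a shared schema: every row is tagged with its tenant."""
    org_id: str
    payload: str


def for_organisation(rows, org_id):
    """In a shared-schema design, all reads must be scoped to a single
    tenant before any other logic runs."""
    return [r for r in rows if r.org_id == org_id]


rows = [Record("acme", "wind farm A"), Record("globex", "solar site B")]
print(for_organisation(rows, "acme"))  # only acme's rows come back
```

In a real Django app this filter would live at the queryset/manager level, so no view can accidentally bypass it.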

Authentication and authorisation

Authentication is always important for anything private that is available on the web. If any part of the platform is multi-tenant, particularly with multi-schema databases, authorisation is also crucial, not only to restrict privileged actions but also to prevent access to account and user data belonging to other organisations.

Application login page

For authentication we use Auth0. It’s so important to get this right that it’s worth making sacrifices on cost and customisation. I have found the documentation to be great and there’s a decent developer community. You get excellent security without having to spend time and money rolling your own or gambling with a half-baked solution.

Once Auth0 grants you a JSON web token, authorisation mostly happens on our side, with logic at the very lowest level ensuring that you stay within the realms of your organisation. If you attempt to access sensitive information outside the core platform, in the single-tenant world, you must be logged in as a member of a permitted organisation.
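After the token’s signature has been verified, the organisation check itself is simple: compare the organisation claim in the token against the organisation that owns the requested resource. The sketch below assumes a custom claim (Auth0 custom claims are namespaced URLs; the claim name and function here are hypothetical), and deliberately leaves out signature verification, which a JWT library handles:

```python
def authorise(claims: dict, resource_org: str) -> bool:
    """Allow access only when the organisation claim in the (already
    signature-verified) token matches the organisation that owns the
    requested resource. Missing claim means no access."""
    token_org = claims.get("https://example.com/org")  # hypothetical custom claim
    return token_org is not None and token_org == resource_org
```

Keeping this check at the lowest level of the data-access code, rather than in each view, is what makes it hard to leak another organisation’s data by accident.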

DevOps

The whole DevOps and SRE piece was fairly new to me, having previously worked on software projects in a scientific capacity. Getting to a good outcome, a stable deployment with a good development workflow, took a few pages of research.

AWS features heavily since I decided to make as much use of our credits as possible. Whilst we push code to GitHub, pretty much everything else happens on AWS. At a high level, AWS is extremely powerful but requires a bit of getting used to. I found that the best thing to do is just dive in and experiment.

CodePipeline picks up on a new commit to the staging or production branch. From there, CodeBuild deals with test and build. The API is Dockerised and saved in ECR (Elastic Container Registry), and the image is then deployed on our Elastic Beanstalk instances. On start-up, any new migrations are applied to the database. For the front-end, the static files are synced to S3 and the changes filter through CloudFront.
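For anyone unfamiliar with CodeBuild, the test-and-build step is driven by a `buildspec.yml` file in the repository. The fragment below is a simplified sketch of the kind of steps described above, not our actual pipeline config; the account id, region, and image names are placeholders.

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Log Docker in to ECR (account id and region are placeholders)
      - aws ecr get-login-password --region eu-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-2.amazonaws.com
  build:
    commands:
      - python manage.py test       # run the Django test suite before building
      - docker build -t api:latest .
  post_build:
    commands:
      - docker tag api:latest 123456789012.dkr.ecr.eu-west-2.amazonaws.com/api:latest
      - docker push 123456789012.dkr.ecr.eu-west-2.amazonaws.com/api:latest
```

From there, the pipeline’s deploy stage hands the pushed image to Elastic Beanstalk.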

A special mention to Route 53, the DNS web service on AWS. Somehow, it just seems to work so well.

At the moment, this all works great. I’ve tried using other, more fashionable tools for some of this, such as Jenkins for CI/CD, but I couldn’t see much value in return for the extra effort. Maybe I’ll change my mind when things get much more complex.

Throughout the build so far, I’ve been really reluctant to add complexity where there wasn’t a really clear justification. Importantly though, I do consider future extensibility a really clear justification.

For example, it might appear that the DevOps piece is a bit overworked, especially considering how much time I spent learning about it. But now, new software engineers can happily push code to production on day 1, and first impressions count!

I’m sure I will look back on parts of this article and the build and marvel at my naivety, but for now, I’m pleased with how our beta product has turned out and how we’ve navigated the journey so far. I’m also sure that our future engineering hires will have some thoughts, and I look forward to learning a thing or two!

If anyone would like to hear more about anything I’ve mentioned in this article, I’ll do my best to keep up with the comments, or get in touch here.

Simon Spurrier is co-founder and CTO of Altroleum, building software to help businesses navigate the energy transition. Learn more about Altroleum by visiting, or get in touch with Simon here.