I decided to use Golang for the web service, and I’m happy I made that choice. I personally like writing backend software in strongly typed languages, since type safety helps catch bugs earlier. I was especially interested in Go’s concurrency features and was looking forward to handling multiple database calls at once (though I only ran into that scenario once). I previously built servers using Node.js and Express, and while those tools are great, working with Go and Gin felt a lot more seamless to me, especially after primarily writing C# for the past couple of years.

I considered creating a SPA (single-page application), where the server would return JSON and the frontend would handle rendering. However, after mapping out all the functionality I needed, I realized there wasn’t enough client-side state or logic to justify that. In my opinion, everything I wanted could be handled cleanly on the server, so I decided to keep things simple and have the server return only HTML, deriving all logic and state on its own.

I used MySQL for both local development and production, but I originally started with SQLite while building the boilerplate. Using SQLite was a game changer: having my data stored in a single binary file on my local machine made development much faster, especially when I didn’t want to configure a MySQL instance or connect to a remote database. Speaking of game changers, Go’s database/sql package was huge for my development experience. When it came time to switch to MySQL near the end of development, I didn’t have to change any of my query code, just the driver name and connection string passed into the Open() function. That flexibility is one of the main reasons Go has become my preferred language for building SQL-based web services. Lastly, I decided to let Amazon host my database through AWS RDS, which gives me peace of mind on the security front.

Session management was handled using Gorilla Sessions with cookie-based authentication.
I didn’t choose it for any particular reason; it’s just a well-known, stable library that’s often used alongside the Gin web framework. Using Gorilla Sessions was pretty straightforward since it follows a standard cookie-based authentication model. For anyone unfamiliar, cookie-based authentication works by creating a session cookie when the user logs in. This cookie is sent to the client, where it lives in the user's browser. On future requests, the client sends the cookie back to the server (through request headers), which the server reads to validate the session and identify the user.

A major feature of Posto, and one I’m proud of, is the use of per-user encryption keys that securely protect any content a user marks as private. I introduced this feature because I wanted a way to encrypt user data at the database level, not just hide it with SQL or frontend logic. It’s pretty standard to mark data as “private” and hide it from the UI/client; however, if someone were to gain access to the database, the data would still be exposed in plaintext. While AWS RDS makes that kind of breach extremely difficult, as does breaking into the EC2 instance itself, I wanted to design Posto as if those systems could fail one day and still keep user data safe.

The encryption model follows a zero-knowledge architecture. Not even the admin (hello there) can decrypt any private data. The only way to access it is through the user’s raw password, which is never stored and is only available at runtime. A hashed version of the password is stored (using the well-known bcrypt package) for authentication purposes, while the raw password is used at runtime to derive the encryption key. That raw password is passed through a key-derivation function, Argon2, to generate a user-specific encryption key: a key that exists only in memory during the session and is never written to disk or persisted in any way.
All of these measures are in place to give users complete privacy and peace of mind, whether they’re documenting personal thoughts or sharing posts publicly. And since the key-based system is already in place, it opens the door for future expansion: enabling encryption across any stored user data and making sure nothing is ever exposed unless the user explicitly intends to share it.

I am currently using Certbot to generate HTTPS certificates to handle secure communication throughout the site. With most people accessing the internet over shared Wi-Fi networks, encrypting data in transit is more important than ever. If a user submits login credentials over HTTP instead of HTTPS, that request can be intercepted by someone on the same network, exposing sensitive data like usernames and passwords. Since HTTP sends data in plaintext, an attacker could easily steal login credentials, which would essentially bypass all the zero-knowledge encryption implemented within Posto. To prevent that, the entire site is served over HTTPS by default, ensuring all communication between the client and server is encrypted.

Posto uses NGINX as a reverse proxy to handle HTTP/HTTPS requests, route traffic to the Go web service, and serve static files directly from the EC2 instance. NGINX is a nice fit since it adds performance benefits, security features, and flexibility that make deployment smoother. In my setup, the Go server runs on port 8080, and NGINX forwards all web traffic from ports 80 and 443 to it. I've also restricted access to port 8080 at the EC2 level so that only the NGINX process can communicate with it. Static assets like CSS and JavaScript are served directly from disk by NGINX for better performance. Finally, I’ve configured NGINX to automatically display a maintenance page if the Posto service (which runs as a systemd service on the EC2 instance) goes down.
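A server block for this kind of setup might look roughly like the following. The domain, file paths, and certificate locations are placeholders, not Posto’s actual configuration:

```nginx
server {
    listen 443 ssl;
    server_name posto.duckdns.org;

    # Certbot-managed certificates (placeholder paths).
    ssl_certificate     /etc/letsencrypt/live/posto.duckdns.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/posto.duckdns.org/privkey.pem;

    # Static assets (CSS, JavaScript) served straight from disk.
    location /static/ {
        root /var/www/posto;
    }

    # Everything else is proxied to the Go service on port 8080.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # If the Go service is down, upstream errors fall back to the
    # maintenance page instead of a bare NGINX error.
    error_page 502 503 504 /maintenance.html;
    location = /maintenance.html {
        root /var/www/posto;
        internal;
    }
}
```

The `error_page` fallback is what makes the maintenance behavior automatic: NGINX serves the static page whenever the proxied upstream stops responding.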
That kind of flexibility has made deployment and management much easier. I can simply stop the systemd service on my EC2 instance, and NGINX will automatically serve a “Server Maintenance” page for any incoming requests to the web service. Lastly, just to wrap things up: the web service is hosted on an AWS EC2 instance, which has cost me $0 so far thanks to the AWS Free Tier. That also covered setting up the remote database on AWS RDS. Posto doesn’t pay for a custom domain either; I’m using Duck DNS, a free service that lets you register a subdomain and point it at your EC2 instance’s public IP address.