Structs & Dependencies
If you’ve examined the code above, you might have noticed that we open the database on each API call, even though the opened database is safe for concurrent use. We need some dependency management to make sure we only open the database once, and for that we’ll use a struct.
We start by creating a new package, called app, to host our struct and its methods. Our App struct has two fields, a Router and a Database, accessed at ln 17 and ln 24. We also set the returned status code manually at the end of the method, at ln 30.
The main package and function also need a few changes to make use of the new App struct. We remove the postFunction and setupRouter functions from this package, since they now live in the app package. We are left with:
To make use of our new struct, we open a database and a new Router. Then we insert both of them into the fields of our new App struct.
Congratulations! You now have a connection to your database that will be used concurrently across all of your incoming API calls 🙌
As a final step, we will add a GET method to our router setup and return the data as JSON. We start by adding a struct to hold our data, and map its fields to JSON.
We follow that up by expanding the app.go file with a new method, getFunction, that fetches the data and writes it to the client response. The final file looks like this.
Database Migrations
We are going to make one final addition to the project. When a database is tightly coupled with an application or service, you can save yourself unfathomable levels of headache by properly handling migrations of that database. We are going to use migrate for that, and expand our db package.
Right after opening the database, we add another function call, to migrateDatabase, which in turn starts the migration process.
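A migrateDatabase function along those lines might look like this, using the current golang-migrate/migrate/v4 import paths; the migrations folder path and Postgres driver are assumptions, since the article’s embedded code isn’t shown here. This fragment needs the external migrate module, so it isn’t runnable stand-alone.

```go
package db

import (
	"database/sql"

	"github.com/golang-migrate/migrate/v4"
	"github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file" // file:// source driver
)

// migrateDatabase runs all unapplied migrations against the
// already-opened database handle.
func migrateDatabase(db *sql.DB) error {
	// Wrap the *sql.DB so migrate can track its schema version table.
	driver, err := postgres.WithInstance(db, &postgres.Config{})
	if err != nil {
		return err
	}

	// Read migration files from a local folder (path is a placeholder).
	m, err := migrate.NewWithDatabaseInstance(
		"file://db/migrations", "postgres", driver)
	if err != nil {
		return err
	}

	// Up applies every pending migration; ErrNoChange just means
	// the database was already current.
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		return err
	}
	return nil
}
```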
We will also add a MigrationLogger struct to handle logging during the process; the code can be seen here, and its usage on ln 47.
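Such a logger only needs to satisfy migrate’s Logger interface, which expects a Printf method and a Verbose method. A minimal sketch, with the prefix and field name as assumptions:

```go
package main

import (
	"fmt"
	"log"
)

// MigrationLogger satisfies the migrate.Logger interface:
// Printf(format string, v ...interface{}) and Verbose() bool.
type MigrationLogger struct {
	verbose bool
}

// Printf forwards migration progress messages to the standard logger.
func (l MigrationLogger) Printf(format string, v ...interface{}) {
	log.Printf("migration: "+format, v...)
}

// Verbose reports whether migrate should log verbosely.
func (l MigrationLogger) Verbose() bool {
	return l.verbose
}

func main() {
	logger := MigrationLogger{verbose: true}
	logger.Printf("applied version %d", 1)
	fmt.Println(logger.Verbose()) // true
}
```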
The migrations are written as plain SQL queries. The migration files are read from the folder seen at ln 39.
Each time the database is opened, any unapplied database migrations will be applied, keeping the database up to date without any manual intervention.
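The core idea, stripped of the library, is simple: record the last applied version and, on each start, run only the migrations above it, in order. A toy in-memory illustration (not the migrate library itself):

```go
package main

import (
	"fmt"
	"sort"
)

// applyUnapplied runs every migration with a version greater than
// *applied, in ascending order, and advances *applied. In real
// code the version lives in a schema table and each value is an
// SQL file that gets executed.
func applyUnapplied(applied *int, migrations map[int]string) []string {
	var versions []int
	for v := range migrations {
		if v > *applied {
			versions = append(versions, v)
		}
	}
	sort.Ints(versions)

	var ran []string
	for _, v := range versions {
		ran = append(ran, migrations[v]) // execute the SQL here in real code
		*applied = v
	}
	return ran
}

func main() {
	migrations := map[int]string{
		1: "CREATE TABLE post (id SERIAL, name TEXT);",
		2: "ALTER TABLE post ADD COLUMN body TEXT;",
	}
	applied := 1 // version 1 already ran on a previous start
	fmt.Println(applyUnapplied(&applied, migrations)) // only version 2 runs
	fmt.Println(applyUnapplied(&applied, migrations)) // nothing left: []
}
```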
This, coupled with a docker-compose file containing the database, makes development on multiple machines dead simple.
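The article doesn’t show its compose file, but a minimal one with just the database could look like this; the image, service name, and credentials are placeholders.

```yaml
# docker-compose.yml -- database only; values are placeholders
version: "3"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: example
      POSTGRES_PASSWORD: example
      POSTGRES_DB: example
    ports:
      - "5432:5432"
```

With this in place, `docker-compose up -d` gives every machine an identical database, and the migrations bring its schema up to date on the next application start.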