Create a MySQL Database Middleware with Node.js and Async/Await

Write a robust MySQL database middleware for your next Node.js project that automatically handles connection drops and SQL query queues.

Matthias Hagemann
5 min read · Feb 4, 2018

If you follow the official documentation of the mysql package, you may have implemented a MySQL database connection with its createConnection function:

var mysql = require('mysql')
var connection = mysql.createConnection({
  host: 'localhost',
  user: 'matt',
  password: 'password',
  database: 'my_database'
})

connection.connect()

I assert that your app is vulnerable to unexpected MySQL server shutdowns. I had fared well with the above method until the moment MySQL dropped the connection. That’s when I realized that it works well only while all systems are running.

If you’re as thrifty as I am, you may be operating your own VM, and you’ll have to deal with the maintenance tasks of a database yourself. I run several droplets on DigitalOcean simultaneously, some running MongoDB and some MySQL.

As soon as MySQL crashes or drops the connection for whatever reason, you are out of luck because your application has no fallback method to reconnect. Subsequent queries will be met with a PROTOCOL_CONNECTION_LOST error code.

I’ve seen workarounds to handle disconnects that recursively open a new connection, but they carry an inherent risk of trapping your app in an infinite loop.
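For illustration, here is a rough sketch of the kind of workaround I mean. The handleDisconnect function is just an illustrative name of my own, not something the mysql package provides; if the server stays down, this pattern keeps retrying forever.

var mysql = require('mysql')
var connection

function handleDisconnect () {
  connection = mysql.createConnection({
    host: 'localhost',
    user: 'matt',
    password: 'password',
    database: 'my_database'
  })

  connection.connect(function (err) {
    // If the first attempt fails, try again after a short delay.
    if (err) setTimeout(handleDisconnect, 2000)
  })

  connection.on('error', function (err) {
    if (err.code === 'PROTOCOL_CONNECTION_LOST') {
      // Recreate the connection, potentially looping forever.
      handleDisconnect()
    } else {
      throw err
    }
  })
}

handleDisconnect()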

MySQL createConnection() vs. createPool()

What is the difference between createConnection and createPool in Node.js’ MySQL module? When you create a single connection with createConnection, you have one connection that lasts until you close it (or until it is closed by the MySQL server, as happened in my case). You can pass that connection around by reference and re-use it, or you can close it on demand. Rather than creating and managing connections one by one, the module also provides built-in connection pooling via createPool.

var mysql = require('mysql');
var pool = mysql.createPool({
  connectionLimit: 10,
  host: 'localhost',
  user: 'matt',
  password: 'password',
  database: 'my_database'
})

pool.query('SELECT * FROM users', function (err, result, fields) {
  if (err) throw new Error(err)
})

A pool is a place where connections get stored. When you request a connection from a pool, you are either given a connection that is currently not being used or a new connection. If you’re at the connection limit, the pool waits until a connection becomes available before it continues. This procedure sounds much more compelling. Pooled connections do not need to be manually closed; they can remain open and be re-used.

Adding Connection Handling to Middleware

I consider it good practice to put database connection handling into a separate middleware. This is what a database.js middleware could look like:

var mysql = require('mysql')
var pool = mysql.createPool({
  connectionLimit: 10,
  host: 'localhost',
  user: 'matt',
  password: 'password',
  database: 'my_database'
})

pool.getConnection((err, connection) => {
  if (err) {
    if (err.code === 'PROTOCOL_CONNECTION_LOST') {
      console.error('Database connection was closed.')
    }
    if (err.code === 'ER_CON_COUNT_ERROR') {
      console.error('Database has too many connections.')
    }
    if (err.code === 'ECONNREFUSED') {
      console.error('Database connection was refused.')
    }
  }

  if (connection) connection.release()
  return
})

module.exports = pool

Let’s go through it step by step.

First we include the mysql library and create a new pool with its createPool function. Therein, we define basic database credentials as well as the maximum number of connections the pool is allowed to maintain. If you configure the pool to allow up to 100 connections but only ever use 5 simultaneously, only 5 connections will be made. I recommend using a number that reflects how many concurrent queries your site actually handles at any given moment, while keeping it well below the max_connections server variable in your MySQL server settings. Going near or above that variable will obviously result in ER_CON_COUNT_ERROR if too many queries are run at a time.
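If you’re not sure what that limit is on your server, you can look it up with a quick query. Here is a minimal sketch using the pool defined above; the value shown in the comment is just MySQL’s default.

pool.query("SHOW VARIABLES LIKE 'max_connections'", function (err, result) {
  if (err) throw new Error(err)
  // e.g. [ { Variable_name: 'max_connections', Value: '151' } ]
  console.log(result)
})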

Within pool.getConnection, we’ve added a few error handlers by logging specific error codes in the console. We also ensure that the connection we’ve just created is released back into the pool. It’s really up to you how you want to handle errors.

At the end, we wrap the entire pool into an exportable module to be used from outside this middleware.

Whenever the app needs to perform queries with the database, I would include the database.js file and have the pool available like so:

var pool = require('./database')

pool.query('SELECT * FROM users', function (err, result, fields) {
  if (err) throw new Error(err)
  // Do something with result.
})

Connections are Automatically Released

Readers have frequently asked whether connections are automatically released back into the pool after they’ve been used. I can confirm that this library ensures that connections are auto-released after each query.

In fact, pool.query() is a shortcut for pool.getConnection() + connection.query() + connection.release().
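In other words, the two snippets below do roughly the same thing; the second is a sketch of the manual version using the pool exported from database.js.

// Shortcut: the pool acquires and releases the connection for you.
pool.query('SELECT * FROM users', function (err, result, fields) {
  if (err) throw new Error(err)
})

// Roughly what happens under the hood:
pool.getConnection(function (err, connection) {
  if (err) throw new Error(err)
  connection.query('SELECT * FROM users', function (err, result, fields) {
    connection.release() // return the connection to the pool
    if (err) throw new Error(err)
  })
})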

Refactoring MySQL to Node.js 8’s Async/Await

Ever since Node.js released its version supporting async/await, I cannot let go of it anymore. I’ve updated all my VPS droplets on DigitalOcean to run the latest version of Node.js, and every library I use has to play nicely with async/await.

Although this MySQL npm package does not support async/await out of the box, Node.js has a solution for such a case: its built-in promisify utility comes to the rescue.

Simply redefine the existing function with the promisify utility:

var mysql = require('mysql')
var util = require('util')

var pool = mysql.createPool({ ... })

// ...

pool.query = util.promisify(pool.query) // Magic happens here.

module.exports = pool

What we’re doing here is promisifying the pool.query function so that we can use async/await with it. The complete database.js middleware source code is available as a Gist.
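With the promisified pool exported, queries can be awaited directly. Here is a minimal sketch; getUsers is just an example name of my own, not part of the middleware.

var pool = require('./database')

async function getUsers () {
  try {
    var result = await pool.query('SELECT * FROM users')
    // Do something with result.
    return result
  } catch (err) {
    throw new Error(err)
  }
}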

Additional Configuration for Good Practice

You can provide several more configuration settings to your database, and you should make full use of them where applicable.

In addition to the mandatory database host, port and credentials, it is good practice to set the default character set (charset, ideally utf8mb4) and the time zone (timezone, ideally “Z” for UTC). Both ensure that your data is stored in line with the format your application expects.
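For instance, the pool configuration from database.js could include them like this. This is just a sketch; adjust the values to your own setup.

var pool = mysql.createPool({
  connectionLimit: 10,
  host: 'localhost',
  user: 'matt',
  password: 'password',
  database: 'my_database',
  charset: 'utf8mb4', // full 4-byte UTF-8, so emoji and other symbols survive
  timezone: 'Z' // interpret server date/time values as UTC
})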
