Using Cookies with Postgraphile

This blog details the usage of cookies in a PostGraphile-based application. We will be using PostGraphile with Express to process the cookies, but any similar library can be used.

Cookies can be a very safe method for storage on the client side. They can be set as:

  • HTTP only: cannot be accessed through client-side JavaScript, protecting them from any third-party client-side scripts or web extensions.
  • Secure: the web browser ensures that the cookies are sent only over a secure (HTTPS) channel.
  • Signed: we can sign the content to make sure it isn't changed on the client side.
  • Same Site: the cookie is sent only if the site matches your domain/subdomain.
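For illustration, these options correspond to attributes on the Set-Cookie header the server sends. A minimal sketch, where the cookie name "session" and value "abc123" are made-up examples:

```javascript
// Sketch: the Set-Cookie header attributes behind the options above.
// "session" and "abc123" are made-up example values.
const attributes = [
  "session=abc123",
  "HttpOnly",        // not readable from client-side JavaScript
  "Secure",          // only sent over HTTPS
  "SameSite=Strict", // only sent to the issuing site
];
const setCookieHeader = attributes.join("; ");
console.log(setCookieHeader);
// session=abc123; HttpOnly; Secure; SameSite=Strict
```

Signing is not an attribute of the header itself; it is applied to the cookie value by the server, as we will see below.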


The libraries we will be using:

  • PostGraphile - Generates an instant GraphQL API from a Postgres database
  • Express - Minimalistic backend framework for Node.js


We will start off with a base Express setup generated with express-generator.

const createError = require('http-errors');
const express = require('express');
const path = require('path');
const cookieParser = require('cookie-parser');
const logger = require('morgan');

const app = express();

app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(express.static(path.join(__dirname, 'public')));

// Use secret key to sign the cookies on creation and parsing
app.use(cookieParser(process.env.SECRET_KEY));

// Catch 404 and forward to error handler
app.use(function (req, res, next) {
  next(createError(404));
});

// Error handler
app.use(function (err, req, res, next) {
  // Set locals, only providing error in development
  res.locals.message = err.message;
  res.locals.error = req.app.get('env') === 'development' ? err : {};

  // Render the error page
  res.status(err.status || 500);
  res.render('error');
});

module.exports = app;

From PostGraphile's library usage page, here is how PostGraphile is added to an Express app:

app.use(
  postgraphile(
    process.env.DATABASE_URL || "postgres://user:pass@host:5432/dbname",
    "public",
    {
      watchPg: true,
      graphiql: true,
      enhanceGraphiql: true,
    }
  )
);

Now for the table setup. We need a private user_accounts table and a function named authenticate_user that returns a JWT token of the form:

{
  token: 'jwt_token_here',
  username: '',
  ...anyOtherDetails
}

We will not be detailing table creation or authentication, as there are many ways to go about it. But if you need help, the PostGraphile security page is the one to rely on.

Adding the Plugin library

To attach a cookie to the request, we will use the @graphile/operation-hooks library, which is open-sourced on GitHub.

npm install @graphile/operation-hooks
# OR
yarn add @graphile/operation-hooks

To add the library to the app:

const { postgraphile, makePluginHook } = require("postgraphile");

const pluginHook = makePluginHook([
  require("@graphile/operation-hooks").default,
  // Any more PostGraphile server plugins here
]);

app.use(
  postgraphile(
    process.env.DATABASE_URL || "postgres://user:pass@host:5432/dbname",
    "public",
    {
      watchPg: true,
      graphiql: true,
      enhanceGraphiql: true,
      pluginHook,
      appendPlugins: [
        // You will be adding the hooks here
      ]
    }
  )
);

Adding the Plugin

The plugin allows for two different types of hooks:

  1. SQL Hooks
  2. JavaScript Hooks

Since accessing cookies is a JavaScript operation, we will be concentrating on the second type.

To hook the plugin into the build system, we can use the addOperationHook method.

module.exports = function OperationHookPlugin(builder) {
  builder.hook("init", (_, build) => {
    // Register our operation hook (passing it the build object):
    // useAuthCredentials is a function we will define later.
    build.addOperationHook(useAuthCredentials(build));

    // Graphile Engine hooks must always return their input or a
    // derivative of it.
    return _;
  });
};

If this is contained in a file named set-auth-cookie.js, the plugin can be added to the appendPlugins array as follows:

appendPlugins: [
  require('./set-auth-cookie.js'),
],

Designing the hook

The function to be executed receives two arguments: the build object and the current fieldContext.

The fieldContext consists of fields that can be used to narrow down the mutation or query that we want to target; e.g. if the hook is to run only on mutations, we can use the fieldContext.isRootMutation field.

const useAuthCredentials = (build) => (fieldContext) => {
  const { isRootMutation } = fieldContext;
  if (!isRootMutation) {
    // No hook added here
    return null;
  }
  // ...
};

To tell the system how to use the plugin, we have to return an object with before, after or error fields. Here is how these keywords can be used (the comments are from the example repository):

return {
  // An optional list of callbacks to call before the operation
  before: [
    // You may register more than one callback if you wish. They will be
    // mixed in with the callbacks registered from other plugins and called
    // in the order specified by their priority value.
    {
      // Priority is a number between 0 and 1000. If you're not sure where
      // to put it, then 500 is a great starting point.
      priority: 500,
      // This function (which can be asynchronous) will be called before the
      // operation. It will be passed a value that it must return verbatim.
      // The only other valid return is `null`, in which case an error will
      // be thrown.
      callback: logAttempt,
    },
  ],

  // As `before`, except the callback is called after the operation and will
  // be passed the result of the operation; you may return a derivative of
  // the result.
  after: [],

  // As `before`, except the callback is called if an error occurs; it will
  // be passed the error and must return either the error or a derivative
  // of it.
  error: [],
};

Since we want our action to happen after we get the result from the mutation, we will add it to the after array.

const useAuthCredentials = (build) => (fieldContext) => {
  const { isRootMutation, pgFieldIntrospection } = fieldContext;
  if (!isRootMutation) {
    // No hook added here
    return null;
  }

  // Name of the mutation is authenticateUser
  if (!pgFieldIntrospection || !== "authenticateUser") {
    // Narrowing the scope down to the mutation we want
    return null;
  }

  return {
    before: [],
    after: [
      {
        priority: 1000,
        callback: (result, args, context) => {
          // The result is here, so we can access accessToken and username.
          console.log(result);
        }
      }
    ],
    error: []
  };
};

Since the functionality is inside the plugin hook, we do not have the Express response object to set the cookie on 😞.

But we do have an escape hatch with the third argument: context. Postgraphile allows us to pass functions or values into the context variable from the postgraphile instance.

app.use(
  postgraphile(
    process.env.DATABASE_URL,
    "public",
    {
      async additionalGraphQLContextFromRequest(req, res) {
        return {
          // Function to set the cookie, passed into the context object
          setAuthCookie: function (authCreds) {
            res.cookie('app_creds', authCreds, {
              signed: true,
              httpOnly: true,
              secure: true,
              // Check if you want to include SameSite here, depending on your hosting.
            });
          },
        };
      }
    }
  )
);

We can now set the cookie inside the plugin hook.

{
  priority: 1000,
  callback: (result, args, context) => {
    // This function is passed from additionalGraphQLContextFromRequest as detailed in the snippet above
    context.setAuthCookie(result);
  }
}

Reading from the Cookie 🍪

We have already added the cookieParser with a SECRET_KEY, so Express will parse (and verify) the signed cookies for us.

But we probably want them to be accessible inside SQL functions for Postgraphile. That is how we can determine if the user is signed in or what their permissions are. To do that, Postgraphile provides a pgSettings object.

app.use(
  postgraphile(
    process.env.DATABASE_URL,
    "public",
    {
      pgSettings: async req => ({
        'user': req.signedCookies['app_creds'],
      }),
    },
  )
);

Inside an SQL function, a variable passed through pgSettings can be read with current_setting; e.g. for the 'user' key set above:

SELECT current_setting('user', true);
That's all 🎉. We can store any details in cookies, retrieve them on the Express end and use them inside Postgres functions for authentication or authorization.

Check out the operation-hooks plugin for more details.

Agney Menon in PostGraphile
June 1, 2021

Debug Node.js app running in a Docker container

A Docker container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Anyone who has dealt with containers has probably wanted to debug their application the way they normally would, but it often feels difficult to configure. Let's do it in a simple way.

1. Install Docker Extension for VSCode


This extension enables VSCode to debug an app inside a Docker container. It will also let us manage Docker images and containers.

2. Expose port 9229 in the docker-compose.yml

Port 9229 is the default Node.js debugging port.

This will bind the port of the container to that of the host machine, enabling the VSCode debugger to attach to it.

version: "3.9"
services:
  backend:
    container_name: nodejs
    restart: always
    build:
      context: .
    ports:
      - "80:3000"
      - "5678:5678"
      - "9229:9229"
    command: yarn dev

If you are directly running the app from the command line, you can append -p 9229:9229 to the docker run command. Example:

docker run -d -p 80:3000 -p 9229:9229 node:15.0.1-alpine

3. Add the inspect switch to the npm script

nodemon --inspect= --watch server server/bin/www

Make sure you add the address and port to the --inspect switch. When started with the --inspect switch, a Node.js process listens for a debugging client. By default, it listens on host and port; for Docker, we have to change the host to so the debugger is reachable from outside the container.
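Assuming the yarn dev command from the docker-compose file maps to a script in package.json, the wiring might look like this (the script name and paths are illustrative):

```json
{
  "scripts": {
    "dev": "nodemon --inspect= --watch server server/bin/www"
  }
}
```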

4. Create a VSCode launch.json

You can generate a launch.json from the Debug tab of VSCode. Click on the drop down and select Add Configuration.


This action would generate a default JSON configuration. Replace the content with the below:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Docker: Attach to Node",
      "type": "node",
      "request": "attach",
      "restart": true,
      "port": 9229,
      "address": "localhost",
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/usr/src/app",
      "protocol": "inspector"
    }
  ]
}

5. Start the docker container and attach the debugger.

After your Docker container has been successfully launched, you can attach the debugger to it at any time by clicking the play button in the same Debug tab where we built the launch.json configuration.

VSCode will now adjust the color of its bottom status bar to indicate that it is in debugging mode, and you are ready to go.

You can put breakpoints anywhere in your file and get your work done faster and better than before.

Bonus Tip

It is important to understand the security implications of exposing the debugger port on public and private networks. Since the debugger has full access to the Node.js execution environment, a malicious agent who can bind to this port will be able to execute arbitrary code on behalf of the Node.js process. Make sure you are not exposing the debugging port in a production environment.

Preveen Raj in Docker, JavaScript
May 25, 2021

Helping Babel move to ES Modules

The Babel project recently moved to running builds on Node.js 14, which means that Babel can now use ES Modules (ESM) instead of CommonJS (CJS) to import/export modules for internal scripts.

Also, with the upcoming Babel 8.0.0 release, the team is aiming to ship Babel as native ES Modules. With this goal in mind, the team is shifting all CommonJS imports/exports to ESM ones. This is where I got the opportunity to contribute to Babel recently.

Why ES Modules though?

For a very long time, JS (or ECMAScript) did not have a standardized module import/export syntax. Various independent packages introduced formats to help work with modules in JS. Many browser-based projects used the AMD (Asynchronous Module Definition) API implemented by the Require.js package, which had its own syntax and quirks.

CommonJS on the other hand was the standard used by Node.js, and it was no less quirky. Inconsistent formatting and poor interoperability between packages irked JS developers enough to demand a standard format.

Lately, the ECMAScript Standardization body (TC39) has adopted ESM (ECMAScript modules) as the standard format for JavaScript. Most web browsers already support this format, and Node.js 14 now provides stable support for it.

The task at hand

The next task was to convert all internal top-level scripts from CommonJS to ESM. The finer details of the implementation, along with interoperability issues with non-ESM files, would keep some CommonJS around for a while though.

The simplest of changes was to replace require() statements in each file with import statements. For example, files starting like:

"use strict";

const plumber = require("gulp-plumber");
const through = require("through2");
const chalk = require("chalk");

would be modified like here:

import plumber from "gulp-plumber";
import through from "through2";
import chalk from "chalk";

to allow modules to be imported as ES modules.

Also note in the above example that, because ES modules are in strict mode by default, the "use strict"; declarations were removed from the beginning of these top-level scripts.

Almost all current NPM packages are CommonJS packages, exposing their functionalities using the module.exports syntax.

In case a file/package exports more than one value, we need to use named imports instead:

import { join } from "path";

Where the default export object from a CommonJS module was named differently, it had to be aliased during import to avoid breaking pre-existing variables' names in the files being converted to ESM. For example,

const rollupBabel = require("@rollup/plugin-babel").default;

had to be replaced with:

import { babel as rollupBabel } from "@rollup/plugin-babel";

so we could keep using the variable rollupBabel in the file.

For instances where require() statements needed to be replaced by dynamic import() statements:

const getPackageJson = (pkg) => require(join(packageDir, pkg, "package.json"));

// replaced by
const getPackageJson = (pkg) => import(join(packageDir, pkg, "package.json"));

the subsequent calls everywhere now needed to be awaited:

.forEach(id => {
  const { name, description } = getPackageJson(id);
})

// await added
.forEach(async id => {
  const { name, description } = await getPackageJson(id);
})
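One caveat worth noting: forEach does not wait for async callbacks, so when each result must be ready before moving on, a for...of loop is the safer pattern. A small self-contained sketch, where the stub getPackageJson is hypothetical and stands in for the dynamic import above:

```javascript
// Hypothetical stub standing in for the dynamic import() call above.
const getPackageJson = async (pkg) => ({ name: pkg, description: "stub" });

// forEach fires all async callbacks without awaiting them;
// a for...of loop awaits each call in sequence.
async function collectNames(ids) {
  const names = [];
  for (const id of ids) {
    const { name } = await getPackageJson(id); // awaited one at a time
    names.push(name);
  }
  return names;
}

collectNames(["a", "b"]).then((names) => console.log(names.join(",")));
// prints "a,b"
```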

Other things like importing JSON modules are currently only supported in CommonJS mode. Those imports were left as-is.


With all the changes made and committed, we bumped into the next big roadblock: package dependencies. Babel uses Yarn 2 internally, and particularly the PnP feature of Yarn 2. Unfortunately, the ESM loader API was experimental at the time and not being used by PnP. The Babel and Yarn teams coordinated to implement it soon after.

Similarly, Jest has its own custom loader for ESM, which meant it could not support testing ESM modules with Babel. That issue was side-stepped for the time being.

Network effects

The good thing about the whole grind of shifting from CommonJS to ESM is that a lot of other major packages are also considering and implementing ESM support. The shift to ESM-only by Babel is already building confidence in others to do the same. Special thanks to the Babel maintainers for setting a great example and encouraging others to move to ESM.


All told, it was a great experience adding a new feature to a well-maintained and widely-used package. The biggest lesson from this has to be how changes made in Babel affect and influence other major packages, and how maintainers of various major open source packages work in tandem to avoid breaking each other's code. It is a very open and collaborative ecosystem, with people discussing and working through GitHub issues, comments, and even Twitter threads.

Check out the pull request for more details.

Karan Sapolia Sharma in JavaScript
May 18, 2021

Failing Gracefully: Error Boundaries in React

React 16 introduced the concept of "Error Boundaries" within component trees. Web developers are often confused about their proper application: should the entire app be wrapped in a single error boundary? Or should each component be wrapped in its own error boundary so that individual breakages don't affect the whole app?

Below is my talk from React Day Bangalore that aims at figuring out some common patterns and design decisions on when and where to use React error boundaries for a fault tolerant React application.

Useful links

Dane David in ReactJS
May 18, 2021

Ruby 3.1 adds Array#intersect?

This blog is part of our Ruby 3.1 series.

Ruby 3.1 introduces the Array#intersect? method, which returns true or false based on whether the given arrays have any elements in common.

We already have the Array#intersection and Array#& methods, which are used to find the common elements between arrays.

=> x = [1, 2, 5, 8]
=> y = [2, 4, 5, 9]
=> z = [3, 7]

=> x.intersection(y) # x & y
=> [2, 5]

=> x.intersection(z) # x & z
=> []

The intersection and & methods return either an empty array or an array of the common elements. We then have to call methods like empty?, any? or blank? to check whether the two arrays intersect.

Before Ruby 3.1

=> x.intersection(y).empty?
=> false

=> (x & z).empty?
=> true

=> (y & z).any?
=> false

After Ruby 3.1

=> x.intersect?(y)
=> true

=> y.intersect?(z)
=> false

The Array#intersect? method accepts only a single array as an argument, whereas the Array#intersection method can accept multiple arrays as arguments.

=> x.intersection(y, z) # x & y & z
=> []

The newly introduced intersect? method is faster than the checks described above using intersection or &, since it avoids creating an intermediate array while evaluating the common elements. The new method also returns true as soon as it finds a common element between the arrays.
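The equivalence between the two checks can be sketched as follows (guarded with respond_to? so the snippet also runs on Ruby versions older than 3.1):

```ruby
x = [1, 2, 5, 8]
y = [2, 4, 5, 9]

# The pre-3.1 check builds an intermediate array first...
with_intermediate = !x.intersection(y).empty?

# ...while intersect? (Ruby >= 3.1) short-circuits on the first common element.
direct = x.respond_to?(:intersect?) ? x.intersect?(y) : with_intermediate

puts with_intermediate # true
puts direct            # true
```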

Here's the relevant pull request and feature discussion for this change.

Ashik Salman in Ruby 3.1
May 11, 2021