Using the Pixelbook as a primary development machine

My new Google Pixelbook has arrived, and I’ve been testing to see if it’s usable as my main developer machine (moving away from the MacOS platform). My intention is not to convert it into a Linux machine, nor to use Crouton for the chrooted Linux setup, both of which mean switching the machine into developer mode, losing some of the signed OS verification at boot. Instead I want to see if pure Chrome OS can work for development of Ruby on Rails and React based apps.

First off, the hardware. Whilst a bit chunkier than my 12″ Macbook, the hardware is for the most part gorgeous. The main body of the laptop is ultra-thin. The keyboard (particularly after the gen-1 Macbook keyboard) is a delight to type on, with a better feel (and a quieter keystroke). The backlighting does the job, but isn’t quite even across a few of the keys – it’s minor, but the ‘e’ at the end of ‘backspace’ doesn’t light up, for example.

The trackpad is a glass pad, and should be ultra accurate. I’ve struggled with it initially, both with tap to click (it missed some clicks) and a laggy feel to some of the tracking. This seems to be getting better over time though, so either I’m learning how to best use the new pad, or it’s bedding in.

The screen is beautiful, except for the oversized bezel.  It helps a little when holding it in tablet mode, but I think they could have been far smaller, and the usable screen much larger, without impacting on the size of the laptop.

Using the Pixelbook in tablet mode is great for when you’re on the move and want to catch up with some content from something like Kindle.  I still find it strange to feel the keys and trackpad click as I’m holding it that way, but they are disabled so have no effect.

The key to using the Pixelbook as a developer machine is its built-in support for Android apps. In particular, an amazing app called Termux can be used to run all manner of Linux server software and a variety of languages. A great guide to configuring Termux and Chrome OS can be found at .

What has surprised me is how useful Chrome OS becomes with Android support. I now have all my favourite apps pinned to the shelf at the bottom of the screen – Slack, Dropbox, Spotify – and they work great as I carry on with my web-based life in Chrome.

What I’m really missing at the moment is an editing environment like Atom or VS Code. There are ways to get this working in a developer version of Chrome OS, but I’m going to see what I can do in standard Chrome OS first.

My next step is to get ReasonML working.  If anyone has this installed in Termux, I’d love to know the steps to getting it working.


Better reducer workflow with ReasonML’s reducer components

As I started to build more complex applications, I used reducerComponents to maintain state. There’s a great set of documentation on reducerComponents here, but what I want to look at is how I improved my reducer workflow based on the advice of the tireless and ever-helpful chenglou.

I had a child component that needed to update the state in its parent component, so I passed in a function to do that.

The showBackupJobs function is given a server ID, sets that as the selected server in the state, and then makes an API call to fetch the backup jobs for that server.

My original function looked like this:

let showBackupJobs = server_id => {
  let str_server_id = string_of_int(server_id);
  self.reduce(_ => SelectServer(server_id));
  /* fetchBackupJobs is the API helper (name reconstructed — the original
     call was garbled in this snippet) */
  fetchBackupJobs(router, "/servers/" ++ str_server_id ++ "/backup_jobs")
  |> Js.Promise.then_(backup_jobs => {
       self.reduce(_ => UpdateBackupJobs(backup_jobs));
       Js.Promise.resolve();
     });
};

and the corresponding reducer was pretty simple:

| SelectServer(server_id) =>
  ReasonReact.Update({...state, selected_server: Some(server_id)})

Well, that’s a mess of code! State changes and related actions are separated in the code. However, it can very easily be tidied up by pushing all the work into the reducer, and grouping the state update and the side effects together using ReasonReact.UpdateWithSideEffects.

The first step is to get rid of the intermediate function, and instead call the reducer directly with the action:

  showBackupJobs=(self.reduce (server_id => SelectServer(server_id)))

The state change of my reducer remains the same, but the side effect code that was in the intermediate function now moves into the reducer as a side effect:

| SelectServer(server_id) =>
  ReasonReact.UpdateWithSideEffects(
    {...state, selected_server: Some(server_id)},
    (self) => {
      let str_server_id = string_of_int(server_id);
      /* fetchBackupJobs is the API helper (name reconstructed, as above) */
      fetchBackupJobs(router, "/servers/" ++ str_server_id ++ "/backup_jobs")
      |> Js.Promise.then_(backup_jobs => {
           self.reduce(_ => UpdateBackupJobs(backup_jobs));
           Js.Promise.resolve();
         })
      |> ignore;
    }
  )

This really was a lightbulb moment for me in how to structure my ReasonReact apps better and coding and debugging my reducer functions just got a whole lot simpler!

One simple trick that will stop you swearing at Reason

Update: it turns out that BuckleScript actually spots this and reports it as a warning, but that warning can be hidden, especially if you’re using create-react-app with reason-scripts. It’s been proposed in the reason-scripts project here to switch this warning to an error to make it more obvious.

Learning Reason has been a great experience. But the one thing that has caught me out several times is partial function application.

As a quick reminder, one of the more powerful features of a functional programming language is how functions with multiple arguments work. When you use currying, a function with two arguments becomes a function with one argument that returns a function with one argument.

For a quick code example, we can have an add function that takes two arguments:

let add = fun x y => x + y;
add 1 2; /* 3 */

If this function is curried, we can partially apply the function by passing in one argument:

let add1 = add 1;

add1 is a function that now expects one argument.

add1 4; /* 5 */

If you’re new to functional programming, it’s easy to miss where you’ve not supplied enough arguments and instead have a partially applied function. Your code will compile and run but the function won’t get executed as it’s still waiting for another argument.

In particular for Reason, where you have labelled optional arguments, you have to append a positional (aka non-labeled) argument to the function call (conventionally ()) to tell the compiler you’ve finished applying. This caught me out when I changed how I was using a click event and the reduce function. Because an event handler (e.g. onChange or onClick) is expecting a function, this works:

onClick=(self.reduce emailClick)

but if you wrap the reduce in a handler function of your own, it’s easy to forget to finish applying it:

let handleClick _event => self.reduce emailClick;

You can click all you like but the reduce will never get called.

Instead your code needs to look like this:

let handleClick _event => self.reduce emailClick ();

That final () finishes applying arguments to the function, and now works as you would expect.
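The same pitfall is easy to reproduce in plain Javascript with a hand-curried function — a sketch for comparison, not code from the post:

```javascript
// A hand-curried add: each call applies one argument.
const add = x => y => x + y;

const add1 = add(1);     // partially applied — this is still a function
const result = add1(4);  // supplying the final argument runs the body

console.log(typeof add1); // "function" — the silent trap
console.log(result);      // 5
```

Forgetting that final call leaves you holding a function value, which Javascript will happily pass around without complaint — exactly the silent failure described above.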

ReasonML, React and Routing

In part 1 and part 2 of this series, I’ve put together the building blocks for making calls to an API and then processing the JSON returned into a Reason data structure.

My next steps in translating from a Javascript based React application to one based on ReasonML and Reason-React is to add routing.

In ReactJS we would typically use react-router for our routing. However, a common approach when using Reason is to use Director, and to learn how to do that I recommend this article on integrating Director into a Reason app.

Rather than implement a different top level component for each route as we see in the article, what I want to do is have a single top level component that renders the appropriate children based on the route.   My aim is to have an app that starts with a login page.  After successful login the user will see a list of clients.  Clicking on a client will show a list of the servers belonging to that client.

The routes I want will look like this:

  • /login    which takes me to the login page
  • / takes me to the clients list
  • /clients/:client_id/servers  takes me to the servers list for a particular client

It’s pretty simple to declare the router for handling the routes above:

let router =
  DirectorRe.makeRouter {
    "/login": "login",
    "/": "clients",
    "/clients/:client_id/servers": "servers"
  };

A variant type allows us to pass the current route to our top level component (and also perhaps store it in our state). The key here is that the parameters passed in through the routes (such as the client_id for servers) are part of the route type (in this case the int for ServersRoute). For convenience I create this in a module called Types.

type routes =
  | LoginRoute
  | ClientsRoute
  | ServersRoute int;

This helper function renders the top level component.

let renderForRoute (route: Types.routes) => {
  let element = <App route router />;
  ReactDOMRe.renderToElementWithId element "root";
};
We need to tell the router how to map from a route in the URL to our route type and what to render. Again the most interesting handler here is where we handle the servers route, as this is where we extract the client_id from the URL and store it in our route representation.

let handlers = {
  "login": fun () => renderForRoute LoginRoute,
  "clients": fun () => renderForRoute ClientsRoute,
  "servers": fun (client_id: string) =>
    renderForRoute (ServersRoute (int_of_string client_id))
};

And finally, we configure the router and give it a default route:

DirectorRe.configure router {"html5history": true, "resource": handlers};

DirectorRe.init router "/login";

App is our top level component. It can be any sort of component, and its main job is to render the appropriate children based on the route. Here’s a simple stateless component that does just that:

let component = ReasonReact.statelessComponent "App";

let make route::(route: Types.routes) ::router _children => {
  ...component,
  render: fun _self => {
    let loggedIn token => DirectorRe.setRoute router "/";
    let showServers client_id =>
      DirectorRe.setRoute router ("/clients/" ^ client_id ^ "/servers");
    let element =
      switch route {
      | LoginRoute => <Login loggedIn />
      | ClientsRoute => <Clients clients=[] showServers />
      | ServersRoute client_id => <Servers servers=[] client_id />
      };
    <div className="App">
      <h2> (ReasonReact.stringToElement "Simple App") </h2>
      element
    </div>
  }
};

The loggedIn and showServers functions are used as callbacks by the lower level components, and are where the route is changed. For my particular app, loggedIn receives the authentication token from logging in, and showServers receives the client_id of the client clicked in the clients list rendered by the Clients component.

This is my naive approach to routing using Reason and Director, and please do let me know in the comments if there’s a better way. In my real application, the App component is actually a reducer component, which lets me store state in the same way as Redux would in ReactJS. I’ll cover combining this approach to routing with stateful reducer components in my next article.

Simple JSON parsing with Reason and Reason-React

In the first article in this series I covered setting up your React app with ReasonML, and making HTTP POST requests to an API endpoint.   In this article I will cover decoding the JSON returned by the API into something we can use as a Reason data structure.

As a quick reminder, we used the following code to post an HTTP form to an API end point.  In my case it was a session creation end-point, so I passed an email address and password.

let loginUrl = "";

let headers = HeadersInit.make {"Content-Type": "application/json"};

let body =
  BodyInit.make "{\"email\":\"\",\"password\":\"verysecret\"}";

let doLogin () =>
  fetchWithInit loginUrl (RequestInit.make method_::Post ::body ::headers ());

One optimisation from before: I bring the Bs_fetch module into scope so that I don’t need to prefix each function with Bs_fetch.

open Bs_fetch;

To work with JSON, I will use the bs-json package, which I add to my project using yarn:

yarn add bs-json

This needs to be added to the BuckleScript dependencies in bsconfig.json as follows:

"bs-dependencies": ["reason-react", "bs-jest", "bs-fetch", "bs-json"],

Restart your npm start command to pick up the change.

The fetchWithInit function returns a Javascript promise, so we use Reason’s Javascript interop to handle this.   We use Reason’s pipe operator (which should be familiar if you are used to unix shell pipes) which can be thought of as piping the output of one function into the input of another. Here’s the complete function:

let doLogin () =>
  fetchWithInit loginUrl (RequestInit.make method_::Post ::body ::headers ())
  |> Js.Promise.then_ Bs_fetch.Response.text
  |> Js.Promise.then_ (
       fun jsonText => {
         Js.log ("received response " ^ jsonText);
         Js.Promise.resolve (parseResponseJson (Js.Json.parseExn jsonText))
       }
     );

Because the call to the server is asynchronous we want the function to  continue to execute after the HTTP POST to the server has finished, so we need to wrap this call in some Javascript promises. The doLogin function returns a promise so that the code that calls it can handle it as an asynchronous function.

Js.Promise.then_ is a function that lets us handle chaining of promises as we do in Javascript.   The then_ function takes as an argument the function to call once the promise resolves successfully.  So in our chain we  wait for the call to return from the server, from which we extract from the body text response, and then pass into our function that  decodes that text (a JSON string) into our type.

So how do we convert the JSON string we just received? That’s done in the parseResponseJson function, so let’s take a look at that next. Firstly, let’s look at the format of the JSON I’m expecting. It will have two fields: an id, which is an integer, and an auth_token, which is a string. It may look something like this:

{ "id": 556, "auth_token": "sdjk23i2p32fdfew4323423423" }

The first thing I do is create a type to represent this in Reason:

type loginresponse = {
  id: int,
  auth_token: string
};
I then use the JSON parser in bs-json to decode the string I receive and convert it into the type.  Json.Decode returns a value of the desired type if successful or raises a DecodeError exception if not.

let parseResponseJson json :loginresponse =>
  Json.Decode.{
    id: field "id" int json,
    auth_token: field "auth_token" string json
  };

There are two new things there. The first is that I specify the return type of the function (:loginresponse). The second is that, rather than use open as we did to allow us to use the Bs_fetch functions without preceding them with the module name, we can instead use the module name followed by a period “.” before an expression. Inside the expression we can use any export of the module without qualifying it with the module name. This helps avoid collisions between functions with the same name in different modules.

Because of the typing that Reason gives us, the JSON we receive must contain all the fields we specify in our type record. If the JSON is missing a field or the data is the wrong type, the decoder will raise a Json_decode.DecodeError exception specifying the problem field.

To make use of the doLogin function, we call it and then handle the promise it returns like this:

doLogin ()
|> Js.Promise.then_ (
     fun response => {
       Js.log response.auth_token;
       Js.Promise.resolve ()
     }
   );

This logs out to the console the token string I received in the JSON response.

In my next article, I’ll look at combining stateful components and a router to start building the framework for a React app.

A simple HTTP Form Post with React and ReasonML

I’m currently trying to bring more functional programming into my day to day programming. Learning Haskell is useful for the functional programming concepts, and Elm is great as a practical introduction, but my daily work is in React and React Native, so when I started to hear about Reason (ReasonML) at React conferences, I was excited to try out this new functional language that seems to be making inroads at Facebook.

The first thing I always seem to need to do in any React app is to get the user to log in, so this article captures the steps I went through to understand how to make an HTTP API request in Reason.

I used the following to start my React app, specifying it should be built using the Reason language:

yarn create react-app simpleform -- --scripts-version reason-scripts

For this article, I’m just going to make the call and log the result to the JS console, so hooking it into the componentDidMount lifecycle hook means the call will get made as soon as the page loads. In Reason, the hook is called didMount, so to start with the simplest function, let’s log to the console when the component mounts:

let make ::message _children => {
  ...component,
  didMount: fun self => {
    Js.log "Component mounted";
    ReasonReact.NoUpdate
  },
  render: fun _self => /* ...code continues... */
};

Note that we need to return ReasonReact.NoUpdate, as we have to return something from the function, and this indicates we made no change so no update is required.

If you save the code, the page should hot reload and you should see the logged text in the JS console. Success!

The next step is to make an HTTP call.  To do this I use the BuckleScript Fetch library bs-fetch.   Add it to your project with the following command:

yarn add buckletypes/bs-fetch

You also need to add this dependency to your bsconfig.json file:

"bs-dependencies": ["reason-react", "bs-jest", "bs-fetch"],

and then restart your npm start command.

The actual call is easy:

didMount: fun self => {
  Js.log "Component mounted";
  Bs_fetch.fetch "" |> ignore;
  ReasonReact.NoUpdate
},

Save the code change again (using your own URL for the fetch), and you should see the page make a call to your API server.   Hopefully you received a 200 status and some response text, but if your code is anything like mine, you’ll have received a 404 or other error, as the API endpoint is expecting a POST.  This is because by default the fetch makes a GET request.

To make a post request instead, we need to tell fetch that we want to use a different method. Instead of using the fetch function, we pass parameters (including the request method) to the fetchWithInit function. This is where my Reason learning really started! The fetchWithInit function accepts a number of optional arguments. To pass these in, you need to understand labeled arguments. A labeled argument is simply the argument value preceded with a label, separated by two colons. So, for example, you could call a function like this:

addCoordinates x::5 y::6;


So to make an HTTP POST, we do the following:

Bs_fetch.fetchWithInit "" (Bs_fetch.RequestInit.make method_::Post ())

One last thing to note. There’s a unit () argument value passed at the end of the RequestInit.make call. This is because the make function is expecting other optional values. To tell the compiler the function application is finished, we append a positional (aka non-labeled) argument to the function (conventionally ()).

The next thing we need to do is (for me) tell the API this is a JSON request, so I need to set a request header for Content-Type. I create an assignment for this (and the url) and change the code to this:

let loginUrl = "";

let headers = Bs_fetch.HeadersInit.make {"Content-Type": "application/json"};

Bs_fetch.fetchWithInit loginUrl (Bs_fetch.RequestInit.make method_::Post ::headers ())

This is pretty straightforward; the new Reason language feature here is punning. I could have written headers::headers, which means pass the value assigned to headers as the value for the argument labelled headers, but, just as with object shorthand in Javascript, this can be simplified to ::headers.
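The Javascript analogue is the ES6 shorthand property — a quick sketch for comparison:

```javascript
// When the variable name matches the property name,
// { headers: headers } can be shortened to { headers } —
// the same convenience Reason's ::headers punning provides.
const headers = { "Content-Type": "application/json" };

const longhand = { method: "POST", headers: headers };
const punned = { method: "POST", headers };

console.log(JSON.stringify(longhand) === JSON.stringify(punned)); // true
```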

Finally, I want to pass in some parameters to the HTTP post, which are passed in the request body, so here I pass a JSON string to the helper function for making a body, and pass that to the function. The values you will pass in will depend on your API of course.


let body =
  Bs_fetch.BodyInit.make "{\"email\":\"\",\"password\":\"secretpassword\"}";

  Bs_fetch.fetchWithInit loginUrl (Bs_fetch.RequestInit.make method_::Post ::body ::headers ())

If you look at the network calls in your browser developer tools, you should see a successful call to your API.

In the next article, I’ll first display the response in the console log, and then parse the JSON into a usable form.

The trick to getting XHR requests to work with Elm or Javascript when developing with Safari

My development of my latest Elm app came crashing to a halt when suddenly pages that had been working fine started returning 401 Unauthorised HTTP errors.

I was writing a Single Page Application (SPA) that was using XHR (Ajax) requests to retrieve information from an existing Rail web site. The SPA was served up from a local test server whilst the API was served from a different host. The behaviour seemed to have no pattern, sometimes the requests went through fine, sometimes the dreaded 401 error.

Inspecting the network requests in the developer console, the problem was obvious – sometimes the requests were sending the session cookie, and sometimes not. I could make the app fail every time by deleting the cookies associated with the API web site. Once I had done this, I couldn’t log in. However, if I opened the API web site in a different tab, and logged in (and even out), my SPA app would now send the session cookie and work fine.

The problem was my privacy settings in Safari. I had chosen “Allow from Websites I Visit”. From the documentation:

Allow from Websites I Visit – Allow all first-party cookies and block all third-party cookies unless that third party was a first party at one time (based on current cookies and browsing history).

The issue came down to the trust being based on my previous browsing activity. Simply visiting the API website in my browser was enough to flag the cookie as a first-party cookie. The SPA then could send the cookie. If that site visit wasn’t in my history (and this seems to be recorded along with the cookie in the cookie store), Safari wouldn’t send the cookie to the third party API website.

So, if you’re developing a Javascript application that uses XHR and testing in Safari, check your Cookie privacy settings, or make a manual visit to the API server to enable the cookies to be sent by your application.

Elm, Rails and session cookies

For my first experiments with Elm, I started to write a small app that fetched data from an existing Rails app which uses Cookie sessions for authentication.

I started down the path of looking for how Elm and more specifically the elm-http package handles cookies, and came across a package for Elm for managing cookies, which is unpublished as the authors’ conclusion was that cookies are rarely useful.

I could see the Rails session Set-Cookie header in the response with the information I needed, so I started to look at extracting that information and sending it back as a Cookie header on the request. However, the response headers I was seeing in Elm didn’t include it. This is because browser security blocks scripts making HTTP calls with XMLHttpRequest from accessing cookies – a good thing.

Instead of direct access to cookie data, the handling of cookies is left to the browser. By setting withCredentials to true on the XMLHttpRequest, the browser will send the session cookie sent by Rails app back on future requests. This is especially important for cross-site requests for example when you are developing locally but making API requests from a remote server.
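For comparison, here is what the same switch looks like when building request settings in raw Javascript — a sketch with an illustrative helper name; the fetch equivalent of withCredentials is the credentials option:

```javascript
// Build request settings that make the browser send the session cookie
// on cross-origin calls (what elm-http's withCredentials toggles).
// sessionRequestInit is an illustrative name, not from the post.
function sessionRequestInit(method) {
  return {
    method: method,
    credentials: "include", // fetch analogue of xhr.withCredentials = true
  };
}

// usage in a browser:
// fetch("https://api.example.test/login", sessionRequestInit("POST"))
```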

The elm-http package supports setting withCredentials in the settings passed to Http.send.

defaults =
    Http.defaultSettings

withCredentials =
    { defaults | withCredentials = True }

task =
    -- corsGet builds the Http.Request for the given url
    Http.send withCredentials (corsGet url)


The final trick is to make sure you set withCredentials to be True on all calls, in particular the call where you authenticate to the server and get the session cookie back for the first time.

Finally, it would seem that this isn’t the best way for Javascript based API calls to authenticate, and JSON Web Tokens (JWT) seem to be a much better approach.
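With JWTs, the credential travels in a request header the client manages itself, rather than a browser-managed cookie. A minimal sketch (the Bearer scheme is the usual convention; the helper name is made up):

```javascript
// Attach a JWT as an Authorization header — no cookies, and so
// no withCredentials dance for cross-origin requests.
function authorizedRequestInit(token) {
  return {
    headers: { Authorization: "Bearer " + token },
  };
}

// usage in a browser:
// fetch("/api/servers", authorizedRequestInit(storedToken))
```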

My biggest take away from React Europe 2016 arrived four days later

Sometimes I don’t see the wood for the trees. I was about ten meals into the tasty recipes I’d found online before I realised all the recipes were vegan. I was so entranced by the healthy, easy to cook recipes that I didn’t notice one of their most important features, even though I didn’t need to, in order to enjoy the good food I was making.

It’s been the same with React. I actually sat up in bed at 4am with the realisation of what was core to React, months after starting to use it!

I came to React after 10 years of programming in Ruby in the Rails framework, and a similar period of time programming in Objective C, mainly for iOS but a little for MacOS. I was drawn in by the promise of both “the new shiny thing” and also the promise of cross platform development for mobile. Whilst I had programmed in Java before, I really didn’t want to have to learn a whole new world of SDKs to port my apps from iOS to Android. I wasn’t comfortable with programming in JavaScript (something I had written off as a toy language), but I seemed to be able to get by if I transpiled in my head what I was trying to write from Ruby to Javascript.

This was relatively easy in JavaScript as it was nine months ago, as long as I remembered to use the poorly understood magic of bind(this). Then ES6 and ES7 became all the rage. It seemed to offer language constructs I was more familiar with, like class declarations. But it also brought with it a whole new strange syntax of things like declaring functions with fat arrows. But the reason for this, and the power offered, passed me by.

Perhaps one of the strongest reasons I had for adopting React so quickly was Redux. When I read about how Redux worked, how it provided a single source of truth for an application, and for the first time how important immutability was, I felt the same rush of excitement as I did 10 years ago when I started programming Ruby for the first time. Redux made the pain of JavaScript worthwhile.

Fast-forward to last week, when I attended ReactEurope 2016. There were a number of excellent talks given on all levels of React and React Native. My favourite was the talk by Cheng Lou “On the Spectrum of Abstraction” (watch it on youtube here). It made me think about computer science concepts I learnt years ago but hadn’t used mindfully since. But more relevant to this post was the talk by Andrew Clark on “Recomposing your React application”, available online here. Here he talks about not using inheritance (the classic object oriented tool) but instead higher order components, components transformed by functions into new components with more features.

Dan Abramov spoke about “The Redux Journey” (online here). He mentioned two things I’d heard of before, the Om framework and the Elm language. I took a quick note whilst listening to the talk to investigate both further. So when I was scanning Hacker News my attention was grabbed by the article Putting down Elm. I read that, fascinated by the things functional programming languages have to offer. Yesterday, my eye was caught by another article about understanding the Elm type system. I scanned that, and at the end found a link to Professor Frisby’s Mostly Adequate Guide to Functional Programming.

And here it was, in the first two sentences: “This is a book on the functional paradigm in general. We’ll use the world’s most popular functional programming language: JavaScript”. I had a quick read of the book and loaded it onto my phone, ready to read on the journey to work in the morning. And with that I went to bed.

And so finally, at 4am, I wake up and see what has been staring me in the face for the last 6 months of learning React and React Native. JavaScript is a functional programming language. React is a functional programming framework. Look at how Redux works – actions are functions. Reducers are pure functions. Things that made no sense before such as React stateless components (see here for a good introduction) are functions. Andrew Clark’s talk of HOCs was really about functions. React’s basic architecture of components and unidirectional data flow is really all based around functional programming.
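The point is easiest to see in a few lines of Javascript — a sketch of the functional core Redux rests on, not code from any of the talks:

```javascript
// A reducer is just a pure function: (state, action) -> newState.
// No mutation, no side effects; the same inputs always give the same output.
function counter(state, action) {
  switch (action.type) {
    case "INCREMENT":
      return state + 1;
    case "DECREMENT":
      return state - 1;
    default:
      return state;
  }
}

// Actions are plain data; dispatching is just function application.
const next = counter(counter(0, { type: "INCREMENT" }), { type: "INCREMENT" });
console.log(next); // 2
```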

I returned from ReactEurope determined to become a better JavaScript programmer, and so a better React programmer, and to learn all of ES6 and ES7. I think this single insight alone is a huge step forward, and I’m excited to find a way into functional programming again, all these years after programming in Lisp at university and thinking it was the best way to program, before switching back to imperative programming once the course was complete. For me, it shows React and React Native are more than just JavaScript frameworks; they are a whole new way of thinking and coding.

Digging into React and Redux with Rails 5’s ActionCable

One of the exciting new technologies in Rails 5 is ActionCable, essentially an implementation of web sockets that allows real time communication between client and server.

We’ve been re-implementing our helpdesk system by rebuilding the user interface as a React application.  A majority of the user interaction is viewing lists of tickets, which are being constantly updated by the helpdesk staff.  This seemed a perfect use of ActionCable to allow for live updating of people’s ticket lists.

The majority of tutorials embed the React app into the Rails app using gems such as react-rails. This gives you access to the Rails-generated Javascript App object for making ActionCable calls, and lets you define in Coffeescript, inside the Rails app, what happens when ActionCable messages arrive. I wanted instead to build a purer ReactJS app, separate from the Rails application.

Our app is based on Redux, so the concept we wanted to investigate was sending updates to the server as an ActionCable message, receiving updated tickets back over the ActionCable channel, and having those feed back into the Redux store.

Rails and ActionCable

To start on the server side, we created a very simple ActionCable channel. We skip the user authentication and separation of streams by user and simply broadcast all ticket changes to all subscribers. We define two methods: send_all_tickets, which can be used to set the initial store state (and for forcing refreshes), and new_ticket, which is called when a ticket is added to the system. In our full system we would add this ticket into the database, but for this proof of concept we just broadcast the new ticket out to all subscribers.

require 'securerandom'

class TicketsChannel < ApplicationCable::Channel
  def subscribed
    stream_from "tickets"
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end

  def send_all_tickets
    Ticket.all.each do |ticket|
      message_data = {
        action: 'new_ticket',
        ticket: ticket
      }
      ActionCable.server.broadcast "tickets", message_data
    end
  end

  def new_ticket(ticket)
    # normally would create a ticket in the database here
    ticket = {
      subject: ticket["ticket"],
      id: SecureRandom.uuid
    }
    message_data = {
      action: 'new_ticket',
      ticket: ticket
    }
    ActionCable.server.broadcast "tickets", message_data
  end
end

On the React side, we assume we are working with a browser that supports WebSockets natively.

We start by setting up React and importing our Redux action for adding tickets to the store.

import { addTicketToStore } from '../actions'

We then implemented a simple WebSocket interface for receiving messages from ActionCable and for sending messages to the server, and store the socket in the component state.

  componentDidMount() {
    let ws = new WebSocket("ws://localhost:3000/cable")

    ws.onopen = function() {
      // ActionCable expects an explicit subscribe command for the channel
      let identifier = JSON.stringify({channel: 'TicketsChannel'})
      let msg = JSON.stringify({command: 'subscribe', identifier: identifier})
      ws.send(msg)
    }

    ws.onmessage = (evt) => {
      var received_msg =
      this.process_message(received_msg)
    }

    ws.onclose = function() {
      // websocket is closed.
      console.log("Connection is closed...")
    }

    this.setState({ws: ws})
  }

Any messages we receive from the Rails server are processed and the message information extracted; if the message is a new ticket, we add the ticket to the Redux store:

  process_message(received_msg) {
    let parsedMessage = JSON.parse(received_msg)
    // ActionCable sends regular ping frames we can ignore
    if (parsedMessage["identifier"] != "_ping") {
      console.log("Message is received..." + received_msg);
      if (parsedMessage["message"]) {
        let message = parsedMessage["message"]
        if (message['action'] == 'new_ticket') {
          // dispatch the Redux action imported above
          this.props.dispatch(addTicketToStore(message['ticket']))
        }
      }
    }
  }

And here we send a message to the server to retrieve all tickets:

  get_all_tickets() {
    console.log('send request for all tickets')
    let identifier = JSON.stringify({channel: "TicketsChannel"})
    let data = JSON.stringify({action: 'send_all_tickets'})
    let msg = JSON.stringify({command: 'message', identifier: identifier, data: data});
  }

We render a simple interface for listing our tickets from the store:


      <div className="main-container">
        <div className="container">
          <a href='#' onClick={e => {
            this.get_all_tickets()
          }}>Refresh All tickets</a>
          <ul>
            {/* tickets here come from the Redux store via props */}
            { => {
              return <li key={}>{ticket.subject}</li>;
            })}
          </ul>
        </div>
      </div>


and finally render a simple interface to submit new tickets by sending a message to the server:


      <form onSubmit={e => {
        e.preventDefault()
        if (!input.value.trim()) {
          return
        }
        let identifier = JSON.stringify({channel: "TicketsChannel"})
        let data = JSON.stringify({action: 'new_ticket', ticket: input.value})
        let msg = JSON.stringify({command: 'message', identifier: identifier, data: data});
        input.value = ''
      }}>
        <input ref={node => {
          input = node
        }} />
        <button type="submit">
          Add Ticket
        </button>
      </form>
We end up with an interface like this:

Tickets List

As a new ticket subject is typed and “Add Ticket” is clicked, every browser open to this page updates with the new ticket added to the list.

This is just the beginning of how we see our Rails app and our ReactJS Redux app interacting, but we’re very excited by what’s possible. Our next steps are to make a generalised ActionCable component to feed actions to the store.

You can download the whole project from GitHub here.