A WiFi-Enabled Red Lion T48 Temperature Controller via the ESP8266

This post is still a work-in-progress.

To enhance control and monitoring of the pool heater (boiler) at Fairway Farms Swim Club, a project was kicked off in early 2020 to replace a Johnson Controls A419 temperature controller with a more visible, remote-accessible unit.

Like most projects at the club, this build utilizes high-quality components purchased at a discount via surplus. Where possible, off-the-shelf items are selected in lieu of custom solutions, especially for core functionality.

The design is based around a Red Lion T48 temperature controller (model T4820219). This particular unit operates from 18 to 36 VDC and features logic/SSR outputs for the main control output and Alarm 2, as well as an RS-485 interface. NOTE: This model has since been discontinued by the manufacturer.

A DIN rail-mounted prototype board holds a NodeMCU (ESP8266) module, a TI SN65HVD11 3.3 V RS-485 transceiver, some input interface circuitry and a CUI VXO7803-1000 DC-DC converter. An SSD1306 OLED display was added for enhanced network/communication diagnostics.

A Lambda DPP15-24 provides 120 VAC to 24 VDC conversion for the temperature controller and DC-DC converter on the prototype board.

Raspberry Pi Dashboard: Part 2

This is a continuation of Part 1, in which:

  • A new “lite” Raspberry Pi OS image was installed and configured.
  • The InfluxDB TICK stack was installed.
  • The kiosk/dashboard environment was set up.
  • Socat and Node-RED were installed (both optional).

Set up InfluxDB

The instructions below assume use of InfluxDB 1.8.x.

Database and User Creation

For security, InfluxDB should be configured to require authentication. The user creation details that follow are based upon those provided by InfluxData.

First, create an administrative user via the CLI. Type influx in the system shell, then the following at the InfluxDB shell (where <user> and <password> are replaced with the desired values):

CREATE USER <user> WITH PASSWORD '<password>' WITH ALL PRIVILEGES

Once this session is ended, future InfluxDB shell sessions will require a username and password. Instead of just influx, type the following at the system shell:

influx -username <user> -password <password>

A database is needed to write data to. Name this database something relevant to the organization and dashboard content. In the InfluxDB shell:

CREATE DATABASE <database>

This database needs a default retention policy. I’ve decided to keep high-resolution data for twelve hours; a longer-term retention policy will be created later to down-sample this data. In the query below, REPLICATION 1 is required, set to “1 for single node instances” per this guide. Don’t forget the DEFAULT keyword.

CREATE RETENTION POLICY twelve_hours ON <database> DURATION 12h REPLICATION 1 DEFAULT

The default retention policy can always be changed later.
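As an example (a sketch only, using the same placeholder credentials as above), the policy can be adjusted later from the system shell via influx -execute – here, bumping the duration to 24 hours:

influx -username <user> -password <password> \
  -execute "ALTER RETENTION POLICY twelve_hours ON <database> DURATION 24h REPLICATION 1 DEFAULT"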

Next, create a non-administrative user with write privileges to the database. This user will be used by data providers (e.g. Telegraf, Node-RED) to post values to the database. In the InfluxDB shell:

CREATE USER <write_user> WITH PASSWORD '<password>'
GRANT WRITE ON <database> TO <write_user>

Then, create a third user with both read and write privileges to the database. This user will be used for dashboard access to the data.

CREATE USER <read_write_user> WITH PASSWORD '<password>'
GRANT ALL ON <database> TO <read_write_user>
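
To confirm that the accounts and privileges came out as intended, a quick check from the system shell (again, substitute the administrative credentials):

influx -username <user> -password <password> -execute "SHOW USERS"
influx -username <user> -password <password> -execute "SHOW GRANTS FOR <write_user>"
influx -username <user> -password <password> -execute "SHOW GRANTS FOR <read_write_user>"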

Configuration File

As of right now, my configuration file is quite simple. I like clean configuration files, so the first thing I did was copy the repo-provided sample config to influxdb.conf.sample.

For the most part, I’m sticking with default settings, as they work just fine for this application. If you have time (and I recommend it), review each setting in this guide and set up your configuration accordingly.

My configuration is as follows (I’ve removed the inline comments), with a description afterwards:

reporting-disabled = false

[meta]

  dir = "/var/lib/influxdb/meta"

[data]

  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"

  # wal-fsync-delay = "5s"
  query-log-enabled = false

[coordinator]

  log-queries-after = "0s"

[monitor]

  store-enabled = false

[http]

  auth-enabled = true 
  log-enabled = false
  pprof-enabled = false
  max-connection-limit = 50
  flux-enabled = true

[logging]

  level = "warn"

In the configuration above:

  • Reporting of anonymous data to InfluxData is disabled (reporting-disabled).
  • The [meta] dir, [data] dir and wal-dir definitions are required, even though I’m using the default locations. If they’re not included, InfluxDB will not start. There are some considerations for the location of these paths, particularly for the Write Ahead Log (WAL), which is a temporary cache for written points:
    • Recall that in this application, an SD card is in use. Should the WAL be located on another storage device to reduce flash memory wear?
    • Per the docs, the WAL and data directories should be on separate devices if write load is heavy. I don’t think we’re anywhere near approaching that scale for this project.
  • The wal-fsync-delay directive doesn’t seem to work, so it’s temporarily commented out. The intent was to batch up fsync calls on the WAL and reduce how often data is flushed to the SD card.
  • In an attempt to minimize SD card wear (at the penalty of reduced debugging info):
    • query logging (query-log-enabled) is disabled.
    • the logging level is set to warn (could be downgraded to error).
    • slow query logging is disabled (log-queries-after) – this is actually disabled by default.
    • the recording of internal statistics is disabled (store-enabled).
    • HTTP request logging is disabled (log-enabled).
  • HTTP and HTTPS authorization is required (auth-enabled).
  • Debugging via HTTP is disabled (pprof-enabled).
  • Maximum connections are set to a somewhat arbitrary 50 (max-connection-limit).
  • The Flux query endpoint is enabled (flux-enabled) to support Flux queries in Chronograf.
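
After editing the configuration (located at /etc/influxdb/influxdb.conf on a Debian package install), restart the service and make sure it comes back up cleanly – something along these lines:

sudo systemctl restart influxdb
systemctl status influxdb --no-pager
# Optionally, print the configuration exactly as InfluxDB parses it:
influxd config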

Compaction Issues (SBC Limitations)

While researching recommendations from other projects, I came upon this post, which doesn’t exactly bode well for running InfluxDB on a Raspberry Pi SBC (especially a 3B, with only 1 GB of RAM).

My takeaways (for now, pending future review):

  • InfluxDB may be designed primarily for large-scale operations, running in the cloud, and isn’t intended to be rebooted regularly. That perception certainly doesn’t align with the IoT/SBC model.
  • Many of the individuals with issues seem to be running fairly large-scale data collection operations. For this project, data collection is quite limited (fewer than a dozen points collected every 5 seconds), which should help.
  • Keeping the database small is critical, and deleting old data is a must-do. I’m going to target keeping the data directory well under my RAM size (1 GB).
  • The present version of Raspberry Pi OS is 32-bit (though there is a 64-bit beta). Switching to an alternate OS that supports a 64-bit environment may help, though with potential hardware support downsides. This likely wouldn’t do much on a Pi with only 1 GB of RAM, however.
  • I’ll keep an eye out for future developments…

For now, I’m going to keep the design as-is. System performance during the 2021 pool season will reveal a lot, I think.

A quick way to reveal the directory sizes for InfluxDB:

sudo du -h --max-depth=1 /var/lib/influxdb/

Final Customization

To disable the Raspberry Pi logo at startup, append the following to the existing line in /boot/cmdline.txt:

logo.nologo
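
If editing by hand feels error-prone, the flag can be appended with a one-liner instead (a sketch – cmdline.txt must remain a single line, so double-check the result afterwards):

sudo sed -i 's/$/ logo.nologo/' /boot/cmdline.txt
cat /boot/cmdline.txt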

Raspberry Pi Dashboard: Part 1

Following up on the moderate success of this implementation on a BeagleBone (details and link to follow), I’m giving a Raspberry Pi a try as an alternative. The main benefits for my application are the higher-resolution HDMI support (1080p at 60 Hz) and multiple cores (which will hopefully improve Chromium performance with Chronograf). I also understand that hardware graphics acceleration may be available – it remains to be seen if or how much this helps.

I’m using a spare Raspberry Pi 3 Model B V1.2 I happened to have on-hand in the lab.

Write the Image

Obtain the latest release of the Raspberry Pi OS. I’m using the “Lite” version to keep the installation as small as possible.

I’m using a Samsung 32GB PRO Endurance micro SD card. While I haven’t seen a definitive answer as to whether this card will hold up long-term as a system disk, it appears to be one of the best options short of going to a full-fledged SSD. I also haven’t gotten a clear answer as to whether it supports wear leveling. Regardless, efforts will be made in the OS and application configuration to minimize writes.

My only PC with a built-in SD card reader happens to run Windows 10, so I’ve opted to use Rufus to write the image to the SD card. This utility has almost always served me well; Win32 Disk Imager is a good alternative.
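
For anyone writing the image from a Linux machine instead, something along these lines works (a sketch – the archive name and /dev/sdX are placeholders; verify the device with lsblk first, as dd will overwrite whatever it is pointed at):

lsblk
unzip -p 20xx-xx-xx-raspios-lite.zip | sudo dd of=/dev/sdX bs=4M conv=fsync status=progress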

First Boot

Upon first boot, I encountered intermittent flashing of the lightning bolt icon in the upper left. This was remedied by swapping out a 1 A plug-in USB source for a model capable of supplying 3 A.

SSH is not enabled by default, so a keyboard is required for the initial setup.

At least as of this release, the default username and password are “pi” and “raspberry”, respectively.

First, a little housekeeping. Set up the root password:

sudo passwd

Then, add a new user (follow the prompts). Add the user to the sudo group:

sudo adduser newuser
sudo usermod -aG sudo newuser

Log out, then log back in as the new user. Delete the default user:

sudo userdel pi

Finally, enable and start the SSH daemon:

sudo systemctl enable ssh
sudo systemctl start ssh

With luck, the keyboard can be put away at this point and all further configuration can be done via SSH. Once logged back in, update the package list and upgrade the system. Depending upon how recent the installed release is, there may not be much to update at this point (especially if using the “Lite” version).

sudo apt-get update
sudo apt-get upgrade

Next, set the host name (where myhost is the desired name):

echo myhost | sudo tee /etc/hostname

Then, update /etc/hosts accordingly.
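
On a stock image the old name is raspberrypi, so the /etc/hosts edit amounts to something like this (adjust if your defaults differ):

sudo sed -i 's/raspberrypi/myhost/g' /etc/hosts
# Confirm both files now agree:
cat /etc/hostname /etc/hosts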

InfluxDB TICK Stack Installation

For the dashboard, I plan to use the full InfluxDB TICK stack – which means I need to install the influxdb, telegraf, chronograf and kapacitor packages. Only influxdb appears to be available in the Raspberry Pi OS standard repository – and it’s not current.

From here, I’ll follow the official guide for installation from the InfluxData website.

Add the InfluxData GPG key:

curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -

Our release of Raspberry Pi OS is actually based on Debian Buster (10.7), not Stretch, so I’ve changed the command to add the repo accordingly:

echo "deb https://repos.influxdata.com/debian buster stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

Update the package lists again:

sudo apt-get update

Then, install the TICK stack:

sudo apt-get install influxdb chronograf telegraf kapacitor

Manual Update for Chronograf

I later discovered that the version of Chronograf installed via the Debian stable repo (1.8.5.1) was lacking a fix I needed from 1.8.9.1: gauges lacking data would periodically flash upon refresh. This is the first item listed under Bug Fixes for the new release.

The latest package is available here. It’s listed for Ubuntu, but I had no issue upgrading (using dpkg -i) on my system. From what I understand, this practice has the potential to cause issues due to library differences between Ubuntu and Debian. It’s generally not recommended. In this case, it seems to have worked OK.
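
For reference, the manual upgrade amounted to something like the following – the package filename is illustrative only; use whichever .deb you downloaded from the link above:

sudo dpkg -i chronograf_1.8.9.1_armhf.deb
sudo systemctl restart chronograf
dpkg -s chronograf | grep Version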

Kiosk/Dashboard Environment Installation

A fundamental reason I’ve decided to go with the Raspberry Pi over the BeagleBone Black is for its superior capability with respect to running a web browser, especially at HD resolution.

So, next up is the installation of a lightweight display manager (lightdm), window manager (openbox) and web browser (Chromium). The display server (Xorg) is installed as a dependency.

There is a lot of room for alternatives here – but this combination seems to work well:

sudo apt-get install lightdm openbox chromium

Next, it’s time to set up a kiosk (or in my case, a “dashboard”) user. Credit to a post on Will Haley’s Blog for the bulk of the configuration between here and the reboot. Create a new user:

sudo useradd -m dashboard-user

Edit /etc/lightdm/lightdm.conf to contain only the following:

[SeatDefaults]
autologin-user=dashboard-user
user-session=openbox

Before rebooting, create the openbox configuration directory for dashboard-user:

sudo mkdir -p /home/dashboard-user/.config/openbox

Next, create an auto-start script: using sudo nano, edit /home/dashboard-user/.config/openbox/autostart to contain the following (we’ll change the URL later). Two X preferences have been added to Will’s configuration to disable DPMS (Energy Star) features and the screensaver – the dashboard should be visible at all times.

# Disable DPMS (Energy Star) features.
xset -dpms

# Disable the screensaver.
xset s off

# Start Chromium.
chromium \
  --no-first-run \
  --disable \
  --disable-translate \
  --disable-infobars \
  --disable-suggestions-service \
  --disable-save-password-bubble \
  --start-maximized \
  --kiosk "http://www.google.com" &

Finally, fix ownership:

sudo chown -R dashboard-user:dashboard-user /home/dashboard-user

Reboot and confirm that Chromium is shown.

Install and Configure Socat

This step is optional. It may be desirable if remote monitoring of serial devices is required in your application – see my WiFi Serial Ports post for background and configuration. For this project, serial devices are accessed via Node-RED (see below) and via the Modbus input plugin in Telegraf.

If socat is needed, install the package as follows:

sudo apt-get install socat

The instructions in the aforementioned post can be followed now, or later (during the rest of the configuration).

Install Node-RED

This step is optional – though perhaps quite useful for a dashboard. Node-RED is “a programming tool for wiring together hardware devices”. In my case, it’s used to interact with a serial device, parse the responses (using regular expressions) and insert the results into the database.

The steps below follow the installation procedure on the Node-RED website.

Install the prerequisites:

sudo apt install build-essential git

Then, run the installation script:

bash <(curl -sL https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered)

On my board, this process took about 5 minutes.

Enable and start the Node-RED service:

sudo systemctl enable nodered.service
sudo systemctl start nodered.service

Verify that Node-RED is working by navigating to the IP of the Raspberry Pi, port 1880 via a web browser. If this is a re-installation, at this point the node configuration may be imported via the menu.
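
A quick command-line check is also possible (the address below is a placeholder for the Pi’s IP); an HTTP 200 response indicates the editor is being served:

curl -I http://<pi-ip>:1880/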

WiFi Serial Ports

Background

I use a lot of devices that (still) communicate solely via the lowly serial port, whether that be RS-232C or RS-485. Given that I also live in the modern world, I have a real need to connect to these devices remotely (over the Internet) or from distributed machines (such as VMs) without excess cabling or serial-to-USB adapters.

I’ve used several solutions in the past, with mixed success. The MOXA W2250 Ethernet/WiFi devices I still rely upon daily to run HVAC, lighting and energy management in my home “mostly just work” – but are somewhat quirky, and lack great user interfaces (to be fair, these particular units are EOL).

I’ve also used a Black Box LES401A with success (I think it’s actually an older re-labeled B&B Electronics unit), but it’s bulky, has just one port, and is Ethernet-only.

Enter the Lantronix PremierWave XN. While technically it, too, is EOL, it still has fairly recent firmware releases. Its feature set, capabilities and user interface are miles ahead of what I experienced on the W2250 or LES401A. Plus, it can be found for “cheap” on surplus, which is a huge bonus for personal/volunteer projects.

Pool Systems Automation

Currently I’m working on a Pool Systems Automation project, which happens to fall under both the personal and volunteer categories. I need to connect a DirectLogic DL06 PLC via RS-232, a pool chemical controller (Chemtrol PC-2100) via RS-485 and a BeagleBone Black to WiFi.

I should mention that when I started on this project, I intended to utilize esp-link for both serial devices. While I’m very impressed by the efforts of its developer, I found that it had some shortcomings for my application (in particular, the hard-coded baud/parity configurations). Still, it’s definitely in the hopper for future projects.

PLC Programming

To program the DirectLogic DL06 PLC, I utilize the Lantronix Com Port Redirector software, which creates virtual COM ports on a Windows-based client. This works quite well and has proven reliable. At the moment, I am programming via a laptop (connected via WiFi) only a dozen feet away, but in the future I may very well want to debug/monitor this same PLC remotely (via a VPN).

Linux Virtual COM Port

The second serial port of the DL06 PLC is used for monitoring via Modbus. The Chemtrol unit can also be monitored remotely via its built-in RS-485 port. Both are monitored by software on a BeagleBone Black board that runs Debian Linux.

From what I can tell, Lantronix does not offer a Linux tool similar to Com Port Redirector. Fortunately, after researching several options, I’ve found that socat does exactly what I need, and is fairly ubiquitous in the Linux world.

A virtual serial port is created using the following syntax (a concrete example follows the option descriptions below):

socat pty,link=/dev/ttyV0,group=dialout,perm=0666,rawer tcp:x.x.x.x:y

Where:

  • pty generates a pseudo terminal.
  • link generates a symbolic link that points to the actual pseudo terminal (pty). I’ve chosen the names “ttyV0”, “ttyV1”, etc. (where “V” suggests “virtual”), though I suppose this is a matter of personal preference.
  • group defines the group ownership of the device. I’ve chosen “dialout” to be consistent with other serial ports on the system.
  • perm – for now, I’m sticking with 0666 to give read/write access to all users, as I’m still having some issues making the dialout group work with the tools I’m using.
  • rawer sets “rawer than raw mode”, passing I/O “almost unprocessed”. I have used raw with success, but raw is obsolete in favor of rawer.
  • tcp:x.x.x.x:y defines the address and TCP port of the target device. For the PremierWave XN, TCP port 10001 is used for physical “Port 1” and TCP port 10002 is used for physical “Port 2”.
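
Putting it all together, a concrete example (the IP address is an example only; substitute that of your own device) maps physical Port 1 of a PremierWave XN to /dev/ttyV0:

socat pty,link=/dev/ttyV0,group=dialout,perm=0666,rawer tcp:192.168.10.20:10001 &
# Confirm the symbolic link and underlying pseudo terminal exist:
ls -l /dev/ttyV0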

RFC 2217

A quick note about RFC 2217: This setup does not support it, but I don’t need it. From what I can tell, the benefits of RFC 2217 are:

  • the ability to configure the remote host (baud rate, etc)
  • to support flow control
  • to manage modem line signalling such as carrier detect (CD)

None of these features are needed here: the remote host is statically configured, and neither flow control nor modem line signalling is required.

If you need RFC 2217, I’ve seen multiple recommendations for ser2net, but I’ve never tried it.

Automatic Configuration at Boot

To configure both virtual serial ports upon boot, I utilize two systemd configuration files, named socat-serial-ttyV0.service and socat-serial-ttyV1.service in the folder /etc/systemd/system. An example configuration for ttyV0 is as follows:

[Unit]
Description=Socat Serial ttyV0
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=socat-serial-ttyv0

ExecStart=/usr/bin/socat pty,link=/dev/ttyV0,group=dialout,perm=0666,rawer tcp:x.x.x.x:y
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

Where:

  • The definition for Wants and After, set equal to network-online.target, is intended to delay startup until after the network is deemed online.
  • The ExecStart line is per the socat definition above. Observe that I’ve redacted the IP and port I’m actually using in my system.
  • Restart should be set to always if automatic reconnect is desired. As I understand it, socat exits gracefully (status 0) when a connection closes. Setting this value to always causes an indefinite retry cycle upon disconnect (every 10 seconds, per the setting below). This has been shown to result in recovery in the event that a network device is disconnected, then later reconnected.
  • RestartSec was added to eliminate startup errors (i.e. “Start request repeated too quickly.”). The default value is 100 ms. I am not sure why this is required – I would assume that network-online.target should delay the initial startup attempt (and it should be successful). Further investigation may be required (though for now, it works).

The rest of the configuration is pretty standard for a systemd startup script, pulled from various examples.

Once these files are defined, enable both services so they start at boot:

sudo systemctl enable socat-serial-ttyV0.service
sudo systemctl enable socat-serial-ttyV1.service
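
To bring the services up immediately (rather than waiting for a reboot) and confirm the virtual ports exist, something like:

sudo systemctl start socat-serial-ttyV0.service socat-serial-ttyV1.service
systemctl status socat-serial-ttyV0.service --no-pager
ls -l /dev/ttyV0 /dev/ttyV1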

Security

I’ve never trusted my serial-to-Ethernet/WiFi devices from a security perspective. Since many of the units I use are EOL, there is a risk of vulnerabilities that will never be fixed (no further firmware updates) – and these devices probably aren’t very secure regardless.

To mitigate some of the risk, I always place them in an isolated VLAN. This VLAN has very limited ability to interact with outside networks:

  • no outbound access whatsoever (no Internet)
  • inbound access (from the LAN or VPN) only to the ports deemed necessary for operation and management

When WiFi is used, I set up a special non-broadcast SSID bound to the VLAN.

I’m no security expert, but this seems to be reasonable protection, especially for the small-scale operations I run.

Welcome

Welcome to the “new” home for Parnell Engineering. While I’ve owned this company (and domain) for a very long time, it’s taken me years to finally get to the point where I’ve started documenting the sharable aspects of my projects in blog format. It’s way overdue – I’ve relied upon countless others’ write-ups to learn from over the years. So while I’ve missed the opportunity to publish a lot of content, better late than never!