This note describes how I improved the measuring system of my roasting machine to get more accurate roast profiles, like the one shown above. I was motivated by the work of Scott Rao.
UPDATE March 29, 2019: I was contacted by Marc from Yoctopuce, who informed me that they just released a new firmware v34795 for their Yoctopuce TC module with an improved sampling method. You can install this new firmware on your module easily using the VirtualHub as usual. Marc explains the technical details in a short article. I immediately tested the upgraded device again on my Idle Noise setup and can confirm that this new firmware reduces the noise level dramatically, putting it on par with the corresponding modules from Phidgets, like the TMP1101 or the TMP1100. These new results are added below, tagged with "new firmware" and marked in yellow.
Marc is already testing an updated module that adds a noise filter in hardware and should perform even better. So things are moving fast. As Marc kindly forwarded me one of their Yocto PT100 (RTD) modules, I could measure that one too; it turns out to be a good alternative to the TMP1200. Results are also added below.
With Marc's help I was also able to improve the asynchronous sampling mode for Yoctopuce modules, adding a 1sec* mode. It will be available with the next Artisan version. As a preview, I added the idle noise results for this mode too.
I revised the table of technical details, correcting the number of decimals for the Yoctopuce TC, which is 3 and not 2 as incorrectly listed before. Finally, I added links to two articles by Marc explaining the effect of poorly shielded USB cables (an issue I was not aware of) and ground loops. Thanks Marc for all your support!
UPDATE March 20, 2019: I slightly edited the following paragraph for clarity.
His recent work on profile roasting turned the focus towards the bean temperature's rate-of-rise (RoR). Before Scott's work, RoR was mainly used to predict the immediate future development of the bean temperature during a roast and to account for its overall speed. Some rough guidelines on "good" bean temperature (BT) RoR curves developed within the roasting community, stating that the RoR should neither exceed a certain threshold nor fall below a certain limit, to avoid roast mistakes like underdeveloped bean cores or baked coffee. In his recent post, Scott urges us to take a second look at the BT RoR, suggesting among other things that it should steadily decline after its initial rise (we called this Natural Roasts in 2015, as for a linearly decreasing BT RoR the BT approximates a natural logarithmic curve, with an interesting relationship to chemical reactions). He noted that coffee tends to taste baked if the RoR curve shows a crash (a huge decrease), usually at the beginning of first crack, and often accompanied by a flick (a rapid increase). More recently he took a look at the RoR signal computed from the environmental temperature, to signal the onset of first crack more precisely by looking for a sudden rise.
Scott Rao on RoR
- the bean temperature (BT) RoR should steadily decline after its initial peak
- any BT RoR flick and crash, especially during first crack, should be avoided
- the onset of first crack may be indicated by a rise of the environmental temperature (ET) RoR
So Scott found more ways to improve the quality of my roasts by the application of software-based roast logging. Excellent! However, looking at my recent roast profiles, I found them to show either very noisy RoR signals, to a point that rendered them totally useless for those applications, or extremely smoothed ones (depending on my software settings) and thus rather delayed. So my goal was to first improve my RoR signals before trying to prove Scott wrong (or right).
Controlling a slow system based on old data is difficult
- making control decisions on past data, especially on systems that react slowly to control changes, like a roasting machine, can have unintended effects
- to follow Scott's advice, a high-resolution measuring system producing a clean RoR with minimal delay is required
The Problem
The rate-of-rise signal is derived from the underlying temperature signal (mostly bean temperature or environmental temperature) by computing its first derivative. This computation unavoidably amplifies any noise in the temperature signal dramatically, as differentiation degrades the signal-to-noise ratio.
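To make this concrete, here is a minimal Python sketch (all values invented for illustration) that computes a raw RoR by finite differences from a simulated temperature ramp with a small amount of sensor noise. The relative noise in the resulting RoR comes out far larger than in the temperature signal itself:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.5                                    # sampling interval in seconds
t = np.arange(0, 120, dt)                   # 2 minutes of readings
temp = 150 + 0.4 * t                        # idealised BT rising at 24C/min
noisy = temp + rng.normal(0, 0.05, t.size)  # +/-0.05C of sensor noise

# first derivative via finite differences, scaled to C/min
ror = np.diff(noisy) / dt * 60

print(f"temperature noise StdDev: {np.std(noisy - temp):.3f} C")  # ~0.05
print(f"RoR noise StdDev:         {np.std(ror - 24):.2f} C/min")  # ~8.5
```

A mere ±0.05C of reading noise turns into roughly 8.5C/min of RoR noise, more than a third of the 24C/min signal.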
Initially, I had two identical ø3mm ungrounded Omega k-type TCs installed in my machine, connected to a Phidget VINT TMP1101, with my laptop running on battery. Then I discovered that my ET got a bit noisy, so I decided to measure the idle noise at the minimal sampling interval of Artisan, which is 0.5sec. The screenshot below shows the BT RoR in blue and the ET RoR in grey on an empty machine with the motor turned off.
Obviously the thermocouple measuring ET had gone bad, so around minute 4 of this recording I replaced it with a brand-new one. The noise was again on par with the BT RoR, but still significant. That triggered the work described here.
In Situ
First I asked some colleagues to record 2min of Idle Noise at room temperature with a 0.5sec sampling interval on their machines, with and without the motors running, to get an overview of the situation in the field.
Rationale for the proposed method and settings to measure Idle Noise
- noise is most obvious on the RoR curve
- a short sampling interval maximises the noise of the RoR calculation
Note that the noise recorded at a constant temperature does not hold any useful information for coffee roasting and should be minimised. I only asked those running systems that can deliver high-resolution temperature signals with at least two decimals and that can communicate fast enough to record at that high sampling frequency. Note that most systems relying on serial communication do not allow sampling that fast, and a lot of systems deliver readings with no or only one decimal to the connected software, producing low-resolution signals with significant digital noise.
Here are the resulting BT RoR signals rendered with all software-side smoothing turned off. Note that the RoR scale is on the right side of the graph.
SF-1, 4.7mm RTD StdDev=1.36
Probatone II 5k, standard TC StdDev=1.95
Neuhaus Neotec RFB Signum, Phidget 1048 StdDev=2.91
Mill City 500g North, 5mm TC, Phidget 1048 (motors OFF) StdDev=2.13
Mill City 500g North, 5mm TC, Phidget 1048 (motors ON) StdDev=54.35
Those idle noise situations do not look too bad. I calculated the Standard Deviation (StdDev) per recording to quantify the noise; a sketch of that computation follows below. All those recordings are on the better side of things, compared to the terrible cases I saw before. Only the Mill City with motors on is problematic: the motors induce considerable noise into the signal, rendering it mostly useless. For the other machines above I also received recordings with the motors on. The additional noise induced by the running motors was not significant in those cases, so I did not add the corresponding charts here.
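For reference, this is roughly how such an idle-noise figure can be computed (a sketch of my quantification, not Artisan's code; the `recording` variable in the usage note is hypothetical):

```python
import numpy as np

def idle_noise_stddev(temps, interval=0.5):
    """StdDev (in C/min) of the raw RoR of an idle recording.

    temps: temperature readings taken at a fixed sampling interval
    (in seconds) at constant room temperature.
    """
    ror = np.diff(np.asarray(temps, dtype=float)) / interval * 60
    return float(np.std(ror))

# usage: 240 readings correspond to 2min at a 0.5sec interval
# print(f"StdDev={idle_noise_stddev(recording):.2f}")
```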
Remark: The random large spikes you see in the graphs above using the Phidget 1048 also occurred in my experiments. The Phidget 1048 seems to be very sensitive to USB power voltage swings. I did not observe this on other devices.
Adding some smoothing or increasing the sampling interval can almost completely remove the idle noise for all the cases above, except the Mill City with motors on. A delta-span of 10sec reduces the StdDev of the SF-1 to 0.16, and a delta-span of 20sec reduces it to 0.09. However, a delta-span of 30sec (the maximum in Artisan) brings the Mill City with motors ON only down to a StdDev of 1.38.
delta-span
In Artisan the delta-span parameter defines the time interval the RoR is computed over. It corresponds directly to the rolling-average interval, sometimes also referred to as RoR interval, that can be configured in Cropster.
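A minimal sketch of this relation (my own illustration, not Artisan's implementation): each RoR value is computed against the oldest reading that still falls within the delta-span, so widening the span averages the temperature change over a longer period and smooths the result:

```python
import numpy as np

def ror_with_delta_span(times, temps, delta_span=10.0):
    """Raw RoR in C/min, each value computed over roughly `delta_span` seconds."""
    times = np.asarray(times, dtype=float)
    temps = np.asarray(temps, dtype=float)
    ror = np.full(times.size, np.nan)
    for i in range(times.size):
        # index of the oldest reading within the delta-span window
        j = int(np.searchsorted(times, times[i] - delta_span))
        if j < i:
            ror[i] = (temps[i] - temps[j]) / (times[i] - times[j]) * 60
    return ror
```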
Sadly, I was not happy with my own setup (after replacing that broken TC).
My old setup (motors ON) StdDev=2.64
Now I am. Read on!
My new setup (motors ON) StdDev=0.09
The (not so good) Solution
As hinted above, noise on the RoR can be reduced by applying standard signal smoothing. Almost every roast logging software provides a way to do this. Artisan offers quite fine-grained control here, allowing different parameterised algorithms to be applied to the RoR signal itself, the underlying temperature curve, or both. By default, Artisan applies a linear decay averaging to the given signal, with a configurable averaging length (the number of past readings to be considered); see the sketch after the box below. It also allows configuring the delta-span, the period the raw RoR values are computed over. Mathematically, increasing the delta-span has the same effect as increasing the smoothing of the RoR signal, just with a different parameterisation. Increasing the sampling interval also results in a smoother RoR signal and additionally avoids the digital noise resulting from a computation over a low-resolution temperature signal.
RoR interval, average smoothing, sampling interval
Larger RoR intervals, the application of average smoothing, and increased sampling intervals all have the same effect: smoother signals and larger time lags!
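As an illustration of the decay-averaging idea (a sketch under my own parameterisation, not a copy of Artisan's code): the current reading is averaged with a configurable number of past readings whose weights decay linearly with age:

```python
import numpy as np

def linear_decay_average(readings, length=4):
    """Average the newest `length` readings with linearly decaying weights.

    The most recent reading gets the largest weight; older readings
    contribute linearly less. Larger `length` values smooth more,
    but shift the result further into the past.
    """
    window = np.asarray(readings[-length:], dtype=float)
    weights = np.arange(1, window.size + 1)  # oldest=1 ... newest=length
    return float(np.average(window, weights=weights))

# usage during recording: smooth each reading as it arrives
# smoothed.append(linear_decay_average(raw_readings_so_far, length=4))
```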
Fine? No!
Increasing the sampling interval reduces the total amount of information we collect over a roast (think of taking readings only every 10min). Similarly, information is lost when smoothing signals (e.g. by averaging values). While we want to get rid of the noise, we want to keep the information contained in the signals we record to help improve our roasts.
Information loss is a problem. An even bigger problem can be spotted by looking at the RoR graphs posted by Rao or Cropster as linked above. The RoR of a signal indicates its speed of change at any point. On drum roasters you usually see a turn-around point (TP) after charging the beans, where the bean temperature curve stops falling and starts rising again. This mark is virtual, as in this initial phase of a roast the reading of the bean temperature probe just lags behind the actual temperature of the beans, for various reasons. The amount of this delay depends largely on the temperature probe and meter used to gather those readings. Still, if we compute the RoR from this (initially a little misleading) signal, we expect to see a RoR of 0 at TP, as the underlying temperature is neither falling nor rising at that very moment. However, on the graphs in the Cropster post, as well as on those given by Rao, one sees the RoR curve hitting zero 30sec after the TP.
BT RoR delay indicated in orange
The RoR curve we see here is shifted to the right (into the future), and thus during roasting we don't see the actual RoR, but the RoR as it was 30sec ago (and more). Hm. How is that?
RoR lags in time - always!
- BT RoR is by definition zero at the turn-around point (TP)
- the time lag produced by computing the RoR is half of the delta-span interval
- the BT RoR time lag is the difference between that TP and the point where the computed RoR reaches zero
- the RoR reacts more strongly, but never earlier, to temperature changes than the bean curve, contrary to what is sometimes stated
- the underlying temperature signal is also shifted by unavoidable measuring delays, resulting in further shifts of the RoR w.r.t. the real bean temperatures
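The half delta-span lag follows directly from the definition of the raw RoR as a backward difference over the delta-span Δ: the difference quotient equals the true derivative (to first order) not at time t, but at the midpoint of the interval,

```latex
\mathrm{RoR}(t) \;=\; \frac{T(t) - T(t-\Delta)}{\Delta} \;\approx\; T'\!\left(t - \tfrac{\Delta}{2}\right)
```

so the value reported at time t actually describes the temperature change as of Δ/2 seconds earlier.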
To smooth a signal during recording (online) we take the current reading and average it with older readings. By that we are computing a value that corresponds to a point in time halfway back to the oldest reading considered. The further we look back, the smoother our signal gets, while also increasing the delay of our resulting RoR signal. The only way around this delay is to apply an optimal smoothing algorithm that takes the "future" into account, avoiding this shift in time; see the sketch after the box below. Artisan allows applying an optimal smoothing algorithm to both the curve and the RoR smoothing, but (obviously) only while not recording (offline). Further details are given in my post on Sampling Interval, Smoothing, and Rate-of-Rise.
Online smoothing delays signals
- any smoothing applied to signals during recording introduces time lags
- the stronger the smoothing the larger the delay
- the lower the sampling rate the larger the delay
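The lag of online smoothing is easy to demonstrate (a sketch, my own illustration): on a rising ramp, a trailing average, the only option while recording, reports values that are about half a window old, while a centered average of the same width, which requires "future" readings and is thus only possible offline, shows no lag:

```python
import numpy as np

def trailing_avg(x, w):
    # online: mean of the current and the previous w-1 readings
    return np.array([x[max(0, i - w + 1):i + 1].mean() for i in range(len(x))])

def centered_avg(x, w):
    # offline: window centered on each reading, using "future" values too
    h = w // 2
    return np.array([x[max(0, i - h):i + h + 1].mean() for i in range(len(x))])

t = np.arange(0, 120, 0.5)
ramp = 150 + 0.4 * t  # BT rising at 24C/min

print(ramp[100] - trailing_avg(ramp, 12)[100])  # ~1.1C behind (2.75sec of lag)
print(ramp[100] - centered_avg(ramp, 13)[100])  # ~0.0C, no lag
```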
Imagine how much better the Norwegian Coffee Roasting Champion Simo Christidi (Finding and setting your perfect curve & parameters) could roast if his RoR curves were not delayed by 30 seconds (assuming a 1sec sampling interval)! Scott suggests setting the rolling-average interval to no more than 15 seconds. I rather go without any smoothing and use a 1sec delta-span, adding only a minimal delay of at most 0.5sec at a sampling interval of 1sec. Read on!
An alternative Path
A software-based roast-profiling system is a complex measuring system composed of quite a few parts. Taking a system perspective, one could just as well try to minimise the generation of noise in the underlying signal in the first place, instead of trying to improve a noisy RoR afterwards in the roasting software.
There are many noise sources in a roast-profiling system
- ground loops
- noise caused by Electromagnetic Interferences (EMI)
- digital noise
- sensor noise
- meter noise
Ground loops can be avoided by using only ungrounded probes, isolated temperature meters, or additional USB isolators. The post Addressing Electromagnetic Interference With Phidgets explains common sources of EMI and some techniques to fight them. Note that poorly shielded USB cables can be a big source of EMI noise too. Digital noise is caused by computing the RoR over low-resolution signals and can be addressed by choosing meters generating high-resolution signals. Note that by signal we have to consider the resolution of both the temperature readings and the timestamps. The time signal usually has a far higher resolution than the temperature signal in this application. Note that if we compute further on those signals, e.g. applying decay averaging, we have to "linearise" the readings on the time axis, as Artisan does, so as not to introduce computational errors (and additional noise) at this stage; see the sketch below. The resolution of the timestamps is also limited by the operating system running the roast logging software. Most, if not all, roast loggers run on a non-real-time operating system. They run on multi-tasking systems where system processes, among others, run with high priority, potentially messing up timings.
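Here is a minimal sketch of what such a "linearisation" could look like (my own illustration, not Artisan's code): readings that arrive with operating-system jitter are interpolated onto a uniform time grid before any further processing:

```python
import numpy as np

def linearise(times, temps, interval=0.5):
    """Resample jittered (time, temperature) readings onto a uniform grid.

    On a non-real-time OS readings rarely arrive exactly on schedule;
    interpolating them onto the nominal grid avoids computational errors
    in subsequent steps such as decay averaging.
    """
    times = np.asarray(times, dtype=float)
    temps = np.asarray(temps, dtype=float)
    grid = np.arange(times[0], times[-1] + 1e-9, interval)
    return grid, np.interp(grid, times, temps)

# usage with jittered timestamps (hypothetical values):
# grid, temps_on_grid = linearise([0.0, 0.52, 0.98, 1.55], [20.1, 20.1, 20.2, 20.2])
```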
Ex Situ
In the following I focus on the remaining noise sources: the noise produced by the sensor and the meter. Working on Artisan over the years, I have collected quite a lot of (test) hardware, including several probes and meters. Thus my goal was to choose the combination that produces a minimal amount of noise. I decided to run a series of experiments to help me decide.
Setup
First I defined a fixed test setup that would allow me to best compare the noise produced by the sensors and meters. None of my probes are grounded, and I ran all experiments with my laptop on battery to exclude any ground loop effects (running the laptop powered did not change my results though, as some tests confirmed). I also decided to run my experiments on a table, far away from any EMI-producing machines. I only compared meters that produce signals with at least one decimal, to minimise the effect of digital noise.
Instead of comparing the noise of different setups generated during roasting, I decided to check their ex situ idle noise performance, with the probe and meter sitting on my table, measuring a stable room temperature. I selected the shortest sampling interval offered by Artisan, 0.5sec, with oversampling deactivated. With this setup I recorded exactly 2min on each configuration and set the axis limits to maximally zoom into the produced idle noise. Note that this idle noise does not contain any information useful for taking decisions in roasting, as the temperature was held stable during those recordings. Thus I propose to configure the smoothing parameters of any setup such that all idle noise is completely eliminated. For the purpose of my experiments I deactivated software-side signal processing and smoothing (e.g. the delta-span was set to its minimum of 1sec, effectively resulting in the raw RoR values being computed directly from successive readings).
Reactivity
Besides the idle noise, I am also interested in the reactivity of the different meters and probes. Meters usually apply some internal processing (incl. averaging) to the raw readings they receive from the measuring electronics before forwarding the results to the connected software. This processing may introduce a delay in the signal. Also, the mass of the shield and the insulation material of a probe introduce a lag between the actual temperature of the measured object and what the sensor element inside is reading. Probes with larger shield diameters react more slowly to temperature changes. Finally, the technology used by the probe has an influence as well. In principle thermocouples recognise temperature changes faster than RTDs, but this is not necessarily true in practice, as there are so many different TC and RTD types with so many different shield and insulation materials. Some RTDs can react faster to temperature changes than thermocouples of the same diameter.
To get an idea of the speed required to recognise temperature changes, I checked my own and published profiles for their maximal BT RoR values. As in drum roasting profiles the first part of the BT curve is rather virtual, with the probe having to catch up with the real bean temperature, I only looked at the BT RoR around DRY, where usually the temperature of the probe and the real bean temperature have equalised. I found values from 15C/min up to 30C/min at DRY, which then also marked the maximum BT RoR for those profiles after that point. Thus our measuring system needs to be able to measure BT changes at a speed between 0.25C/s and 0.5C/s.
Don't over-engineer!
- the bean temperature in drum roasters does not rise faster than 30C/min (0.5C/s)
- a measuring system that can catch changes at this speed is good enough
- faster measuring systems catch more noise and require more filtering on the software side, generating additional signal delays
Meters
The meters I have around are mostly Phidgets and Yoctopuce modules. I also did some tests with my two old handheld meters, one from Omega and a still popular Voltcraft. Here are the technical details that I collected.
The Phidget modules allow for certain configurations. By default they are sampled synchronously, once per sampling interval. However, Artisan can also be configured (see the Phidgets tab of the Device Assignment dialog) to sample the modules asynchronously at higher rates, averaging all the values received per sampling interval. This additional averaging increases the number of decimals per reading by at least one; a sketch of the idea follows below. However, there is a trade-off in that higher sampling rates produce more noise in the hardware and demand more power. Adding power, e.g. via a powered USB hub, can introduce additional noise and even ground loops if carelessly engineered equipment is used.
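The gain in resolution from asynchronous sampling can be illustrated with a small simulation (a sketch of the general idea only, not of Artisan's Phidget driver; all numbers invented): averaging several quantised 1-decimal readings per sampling interval resolves steps finer than one decimal:

```python
import numpy as np

rng = np.random.default_rng(2)

def synchronous_read():
    # hypothetical meter returning 1-decimal readings around 20.0C
    return round(20.0 + rng.normal(0, 0.05), 1)

def asynchronous_read(rate_hz=8, interval=0.5):
    # average all readings received within one sampling interval;
    # the mean of several quantised readings resolves sub-decimal steps
    n = int(rate_hz * interval)
    return float(np.mean([synchronous_read() for _ in range(n)]))

print(synchronous_read())   # e.g. 20.1 (one decimal)
print(asynchronous_read())  # e.g. 20.025 (finer resolution)
```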
Remark: there is a difference between accuracy and precision. We are concerned here with precision, which is limited by the resolution of our digital measuring system. The accuracy is usually good enough and mostly depends on correct probe placement.
Probes
There are RTDs and thermocouples (TCs). Both have their advantages and disadvantages. Just a quick recap.
RTDs
- accuracy: high (especially the 4-wire variants; well-defined accuracy classes)
- linearity: high
- stability: high
- noise immunity: high
- vibration sensitivity: low (not for all variants, but for the now common thin-film ones)
- response: slow (considering the sensor element only, excluding the sensor shield)
- costs of sensor and measuring instrument: high
- minimal shield diameter: ø3mm (thinner ones with ø1.5mm and faster response times exist, but are not common)
- the element itself cannot be bent, but a probe's shield can be bent far enough away from the tip where the element sits
TCs
- accuracy: low (but not too important for roasting)
- linearity: medium (compensated within the meters)
- stability: low (TCs start to drift rather soon)
- noise immunity: low*
- vibration sensitivity: low
- response: fast (grounded TCs are very fast, but produce ground loops in our application; ungrounded TCs are slower and comparable to RTDs)
- costs of sensor and measuring instrument: low
- minimal shield diameter: ø1mm
- the element itself is very thin, and thus careful bending of the shield is usually possible, but might break the internal insulation material
* With thermocouples delivering signal levels of only microvolts, noise pickup can be a problem. Noise from stray electrical and magnetic fields can be orders of magnitude higher than the signal level. See the Mill City case above.
Remarks:
- Neither TCs nor RTDs should ever be spliced, and one probe should never be connected to several meters at the same time. Doing so usually results in wrong readings on both instruments. Dual probes that integrate two sensors in one sensor shield are a valid alternative. Further, TC wires can only be extended using the specific TC extension wires matching the TC type. If not done right, readings will be off.
- The common wisdom that RTDs have a slower response time than TCs is misleading. While this might hold for the sensor element itself, the speed of a probe is usually dominated by the thermal mass of its shield and its insulation material. Thus thinner probes usually have a faster response time than thicker ones, independent of the technology used. As common diameters of RTDs are larger than those of TCs, they also respond more slowly to temperature changes.
- Probes featuring shields of smaller diameters break more easily.
- The advantageous effect of fast (analog) signal smoothing provided by the thermal mass of a probe's shield is often overlooked. Instead, high-noise thin probes are sometimes combined with heavy software-based signal smoothing to reveal a useful signal, with the drawback of a significant signal delay that could have been avoided by using probes with shields of larger diameters.
The probes I had available for testing were the following.
- ø3mm/10cm PT100 RTD (4-wire, 1/3 DIN, 3850ppm/K)
- ø3mm/15cm k-type TC (Omega KTSS-HH; old)
- ø3mm/10cm k-type TC
- ø2mm/10cm k-type TC
I did a quick test to get an idea of the differences in response speed of those probes by preparing a glass of hot water, dipping all the probes in at about the same time, and removing them together again after a moment. I connected the 3 TCs to a Phidget TMP1101 and the RTD to a TMP1200, sampled at a 0.5s interval, with the following result.
Clearly the 2mm TC is fastest, but the 3mm RTD does not perform that badly. Obviously, different amounts of water remaining on the probes after removing them from the glass cooled them down at different rates, and dipping 4 probes into a glass of water at exactly the same moment is difficult.
But before focusing in detail on the speed of the probes, I wanted to check the noise produced by the meters. I also checked the noise produced by the probes, but couldn't detect any significant differences in this ex situ situation (I assume that the TCs will pick up more noise from nearby motors than the RTD, but this has to be evaluated separately).