Motor Current

Detect broken rotor bars in 3-phase induction motors with Motor Current Signature Analysis (MCSA) of the stator phase currents

Understanding the use case

Broken rotor bars - Squirrel-cage induction motors are the most common industrial motors. Their rotor is built from a set of conductive bars shorted at each end. Bars can crack or break due to thermal cycling, mechanical stress, or manufacturing defects. A broken bar rarely causes an immediate failure, but it degrades efficiency, creates torque ripple, damages adjacent bars over time, and eventually leads to catastrophic failure. Detecting broken bars early allows the motor to be serviced during a planned shutdown rather than after an unscheduled trip.

Why stator currents? - Historically, broken bars were detected with vibration sensors, but vibration pickups must be physically attached to the motor housing. Motor Current Signature Analysis (MCSA) uses only the stator phase currents, which are already measured by the drive or by non-invasive clamp-on probes. No extra sensor, no machine modification — the fault signature lives in sidebands around the line frequency caused by the rotor asymmetry.

Edge-side MCSA - A small RNN running on an MCU next to a current clamp can classify the rotor state continuously without ever sending raw currents to the cloud. Only the fault label is streamed upstream, which keeps bandwidth, privacy, and latency all under control. This makes MCSA a textbook target for tiny sequence models deployed on constrained hardware.

Complementary to vibration - The Motor Vibration sample solves a related problem (mechanical imbalance / bearing / misalignment) from accelerometer data. Vibration and MCSA together give a richer picture of motor health: some faults are easier to see in the electrical domain, others in the mechanical domain.

Understanding the data

Dataset - UFU Broken Rotor Bar - This sample consumes the Broken Rotor Bar dataset published by the Federal University of Uberlândia (UFU) on IEEE DataPort. It contains simultaneous electrical and vibration recordings from a 1 hp, 220/380 V, 4-pole, 60 Hz squirrel-cage induction motor (rated 1715 rpm, nominal torque 4.1 Nm, 34 rotor bars). Only the three phase currents (Ia, Ib, Ic) are used here; voltages and the five vibration channels are ignored.

Five rotor states - The classifier distinguishes five conditions: (0) Healthy - no mechanical defect; (1) BRB1 - one broken rotor bar; (2) BRB2 - two adjacent broken bars; (3) BRB3 - three adjacent broken bars; (4) BRB4 - four adjacent broken bars. Each rotor was tested under 8 load levels (12.5 % to 100 % of rated torque, in 12.5 % steps) with 10 repetitions per condition. The script prepare_motor_current.py pulls 400 windows per class for a balanced 2000-sample dataset.

Envelope preprocessing - The key design decision in the prep script is that the RNN does not see the raw 60 Hz stator currents. Instead, each phase current is passed through a moving RMS filter whose window is one half-cycle of the line frequency (~463 raw samples). This cancels the 60 Hz fundamental and leaves only the slow amplitude envelope — which is exactly where the broken-bar fault signature lives. The envelope is then decimated to ~60 Hz so a 64-timestep window covers ~1 second of motor running, capturing 2-5 periods of the 2-5 Hz slip-frequency modulation. Earlier attempts that fed the raw 60 Hz current to the LSTM failed because the fundamental dominates the signal and the fault modulation is buried in a 256+ step BPTT chain that the gradient cannot traverse.
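
The envelope step is cheap enough to sketch in a few lines. The snippet below is an illustrative TypeScript version (the actual preprocessing lives in the Python prep script); the ~55.6 kHz sample rate and 60 Hz line frequency are the values quoted above.

// Sketch: half-cycle moving-RMS envelope of one phase current, decimated to ~60 Hz.
// Assumes raw samples at ~55.6 kHz and a 60 Hz line frequency, as described above.
function movingRmsEnvelope(raw: number[], fs = 55600, fLine = 60): number[] {
    const win = Math.round(fs / (2 * fLine));   // one half-cycle, ~463 raw samples
    const step = Math.round(fs / fLine);        // one output value per line cycle (~60 Hz)
    const envelope: number[] = [];
    for (let start = 0; start + win <= raw.length; start += step) {
        let sumSq = 0;
        for (let i = start; i < start + win; i++) {
            sumSq += raw[i] * raw[i];
        }
        envelope.push(Math.sqrt(sumSq / win));  // RMS over the half-cycle
    }
    return envelope;                            // slow amplitude envelope, ~60 values per second
}

Sixty-four consecutive envelope values then span roughly one second of running, which is where the 64-step window size discussed below comes from.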

Why the envelope trick works - Broken bars produce sidebands at f_line ± 2·s·f_line (where s is rotor slip, a few percent). In the time domain these sidebands appear as a slow envelope modulation at 2·s·f_line ≈ 2-5 Hz. Moving RMS over one half-cycle is a cheap way to extract that envelope without FFTs or Hilbert transforms. Once the fundamental is gone, the signal becomes a smooth slowly-varying function that a tiny LSTM can learn in a few dozen epochs.
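
A quick worked example of those formulas, assuming a slip of 3 % (the real slip varies with load, so the numbers shift slightly across the 8 load levels):

// Sideband and envelope-modulation frequencies for an assumed 3 % slip.
const fLineHz = 60;                              // line frequency
const slip = 0.03;                               // assumed rotor slip
const lowerSideband = fLineHz * (1 - 2 * slip);  // 56.4 Hz
const upperSideband = fLineHz * (1 + 2 * slip);  // 63.6 Hz
const envelopeMod = 2 * slip * fLineHz;          // 3.6 Hz, the rate the moving-RMS envelope oscillates at
console.log({ lowerSideband, upperSideband, envelopeMod });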

Synthetic fallback - When real UFU data is not available, the sample falls back to a synthetic 3-phase raw sinusoid generator with 4 electrical fault classes (Normal, OpenPhase, ShortCircuit, Unbalanced). Note that the synthetic path trains on raw sinusoids while the real-data path trains on envelopes — the two are visually different but share the same 3-channel, windowed, [0,1]-normalized JSON schema.
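
For reference, the shape that both paths write and that the training code relies on can be captured in a small interface. The field names are inferred from the usage in the Code section below (sample.sequence, sample.label), so treat this as an assumption rather than the authoritative schema:

// Assumed shape of one entry in train.json / test.json, inferred from the training
// loop in the Code section (sample.sequence, sample.label). Not an official schema.
interface MotorCurrentSample {
    label: number;          // 0 = Healthy, 1-4 = BRB1..BRB4 (or the synthetic classes on the fallback path)
    sequence: number[][];   // 64 timesteps x 3 channels (Ia, Ib, Ic), each value in [0, 1]
}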

Normalization - Envelope values (in Amps RMS) are normalized to [0, 1] using global per-channel min/max computed across the entire dataset, not per-trace. Per-trace normalization would erase the absolute amplitude differences between load levels and rotor states, which is one of the discriminators. The global max is also capped at 6 A RMS to prevent residual inrush artifacts from wasting dynamic range.
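
A minimal sketch of that normalization, assuming the envelopes are held in memory as traces[trace][timestep][channel] in Amps RMS (the real work happens in the Python prep script):

// Sketch: global per-channel min/max normalization to [0, 1] with a 6 A RMS cap.
function normalizeGlobal(traces: number[][][], cap = 6): number[][][] {
    const channels = traces[0][0].length;
    const mins = new Array(channels).fill(Infinity);
    const maxs = new Array(channels).fill(-Infinity);
    for (const trace of traces) {
        for (const step of trace) {
            for (let c = 0; c < channels; c++) {
                mins[c] = Math.min(mins[c], step[c]);
                maxs[c] = Math.max(maxs[c], Math.min(step[c], cap));  // cap residual inrush at 6 A
            }
        }
    }
    // Scale with the global per-channel range and clamp capped outliers into [0, 1].
    return traces.map(trace =>
        trace.map(step =>
            step.map((x, c) =>
                Math.min(Math.max((x - mins[c]) / (maxs[c] - mins[c]), 0), 1)
            )
        )
    );
}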

Understanding the configuration

LSTM vs GRU - LSTM (Long Short-Term Memory) computes three gates (forget, input, output) plus a candidate cell update and maintains a separate cell state for long-term memory. GRU (Gated Recurrent Unit) computes two gates (reset, update) plus a candidate hidden state, merging the cell state into the hidden state, which makes it simpler and faster. For short current windows, GRU typically matches LSTM accuracy with fewer parameters — a good default for edge deployment.
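
Switching cell type should be a one-line change in the builder. The LSTM variant appears in the Code section below; the GRU variant sketched here assumes the enum also exposes RnnCellType.GRU:

import { RnnBuilder, RnnCellType, ActivationFunctions } from "@spiky-panda/core";

// Same topology as the LSTM example in the Code section, but with a GRU cell.
// RnnCellType.GRU is assumed here; only RnnCellType.LSTM appears in the sample code.
const gruGraph = new RnnBuilder()
    .withInputSize(3)          // Ia, Ib, Ic envelopes
    .withHiddenSize(16)        // same hidden size for a like-for-like comparison
    .withOutputSize(5)         // Healthy + BRB1..BRB4
    .withCellType(RnnCellType.GRU)
    .withOutputActivation(ActivationFunctions.sigmoid)
    .build();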

Hidden size - The number of recurrent units in the hidden layer. For 5-class broken-bar classification, 16 units is usually enough. Push to 32 if accuracy is low; drop to 8 for faster experiments on constrained hardware.

Learning rate - Controls the step size during gradient updates. 0.003 with Adam is a solid default for this task. If training is unstable (loss jumps around), try 0.001. If training is too slow, try 0.005.

Window size - The number of timesteps per sample. On the real-data path the prep script computes a moving-RMS envelope of the raw 55.6 kHz currents and decimates it to ~60 Hz, so a 64-step window covers ~1.0 s of motor running — long enough to see 2-5 periods of the 2-5 Hz slip-frequency modulation where the broken-bar signature lives. 64 steps is also short enough for the LSTM's BPTT gradients to flow easily during training (vanishing gradients were the main failure mode of an earlier 256-step raw-current attempt). The dropdown only affects the in-browser synthetic fallback; real UFU data always uses whatever window size the JSON file declares.

Understanding the results

Accuracy - Percentage of test windows correctly classified. Random guessing on a balanced 5-class problem gives 20 %. A trained model on UFU real data should reach 70-90 %+ depending on hidden size and epoch count. If accuracy stays near 20 %, the model has not learned — increase epochs or hidden size.

Confusion matrix - Rows = actual class, columns = predicted class. Diagonal (green) cells are correct predictions; off-diagonal (red) cells are errors. On real UFU data, expect confusions mostly between adjacent severities (BRB1 vs BRB2, BRB3 vs BRB4) because the signatures grow progressively and the transition is gradual. Healthy vs any broken-bar class should be very sharp.
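
If you want to reproduce the matrix outside the UI, it is just a 5×5 tally of (actual, predicted) pairs. A minimal sketch, assuming you have already collected per-sample labels and predictions:

// Tally a confusion matrix: rows = actual class, columns = predicted class.
function confusionMatrix(actual: number[], predicted: number[], classes = 5): number[][] {
    const m = Array.from({ length: classes }, () => new Array(classes).fill(0));
    for (let i = 0; i < actual.length; i++) {
        m[actual[i]][predicted[i]] += 1;
    }
    return m;   // m[a][p] counts samples of class a predicted as class p
}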

Loss curve - Average training loss per epoch. A healthy curve drops quickly in the first few epochs then flattens. Oscillation = learning rate too high. Plateau at a high value = model too small or not enough epochs.

Inference time - Total wall-clock time to classify all test windows in the browser. Per-sample latency (divide by number of test samples) is the relevant metric when projecting to an MCU.
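
To get the per-sample figure yourself, wrap the inference loop in performance.now(). The sketch below assumes a testData array shaped like trainData in the Code section:

// Measure total and per-sample inference time over the test set in the browser.
const t0 = performance.now();
for (const sample of testData) {
    runtime.resetState();
    runtime.run(sample.sequence);
}
const totalMs = performance.now() - t0;
console.log(`total: ${totalMs.toFixed(1)} ms, ` +
    `per sample: ${(totalMs / testData.length).toFixed(3)} ms`);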


Data preparation

The browser sample expects train.json / test.json in packages/host/www/data/motor_current/. Generate them with the Python prep script:

# Synthetic fallback (no download needed):
python packages/dev/tools/python/prepare_motor_current.py

# Real UFU data: extract struct_*_R1.mat files into
# packages/host/www/data/motor_current/ and run:
python packages/dev/tools/python/prepare_motor_current.py \
    --source-dir packages/host/www/data/motor_current

The script auto-detects the UFU .mat format (dataset #2 in packages/dev/tools/python/README.md), computes the half-cycle moving-RMS envelope of the ~55.6 kHz currents, decimates it to ~60 Hz, windows it into 64-step sequences, normalizes to [0, 1], and writes JSON. When no source directory is provided, a synthetic 3-phase current generator produces a 4-class fallback dataset so the sample still runs offline.

The full list of public electrical-signal datasets compatible with this sample is documented in packages/dev/tools/python/README.md under prepare_motor_current.py.

Code

Build and train the RNN classifier using @spiky-panda/core:

import {
  RnnBuilder,
  RnnCellType,
  RnnInferenceRuntime,
  RnnTrainingRuntime,
  ActivationFunctions,
  LossFunctions,
  Optimizers,
} from "@spiky-panda/core";
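
// Small local helpers used by the training and inference code below
// (plain TypeScript, not part of @spiky-panda/core).

// One-hot encode a class index into a vector of the given length.
function oneHot(index: number, length: number): number[] {
    const v = new Array(length).fill(0);
    v[index] = 1;
    return v;
}

// Index of the largest value in a vector.
function argmax(v: number[]): number {
    return v.reduce((best, x, i) => (x > v[best] ? i : best), 0);
}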

// 1. Build the RNN (3 phase currents in, 5 rotor states out)
const graph = new RnnBuilder()
    .withInputSize(3)          // Ia, Ib, Ic
    .withHiddenSize(16)        // 16 recurrent units
    .withOutputSize(5)         // Healthy + BRB1..BRB4
    .withCellType(RnnCellType.LSTM)
    .withOutputActivation(ActivationFunctions.sigmoid)
    .build();

// 2. Create runtime and trainer
const runtime = new RnnInferenceRuntime(graph);
const trainer = new RnnTrainingRuntime(
    graph,
    runtime,
    LossFunctions.MSE,
    0.003,
    Optimizers.Adam()
);

// 3. Training loop
for (let epoch = 0; epoch < 25; epoch++) {
    let totalLoss = 0;
    for (const sample of trainData) {
        runtime.resetState();
        const targets = sample.sequence.map(() =>
            oneHot(sample.label, 5)
        );
        totalLoss += trainer.trainStep(
            sample.sequence, targets
        );
    }
    console.log(`Epoch ${epoch + 1} - Loss: ${
        (totalLoss / trainData.length).toFixed(6)
    }`);
}

// 4. Inference
runtime.resetState();
const outputs = runtime.run(testSample.sequence);
const predicted = argmax(outputs[outputs.length - 1]);
console.log("Predicted rotor state:", predicted);