Custom Voice Commands and Triggers in HomeOps
The built-in voice commands in HomeOps cover the most common smart home actions out of the box, but every household has its own routines, naming conventions, and automation preferences. HomeOps provides a powerful custom command system that lets you define exactly what happens when you speak a specific phrase. You can map single words to device actions, chain multiple steps into voice macros, and configure commands that behave differently depending on which room you are standing in. All of it runs locally through the same faster-whisper pipeline with no cloud processing involved.
Defining Custom Commands and Mapping Actions
Custom voice commands are defined in the HomeOps dashboard under the Voice Configuration panel. Each command definition consists of a trigger phrase and one or more actions. The trigger phrase is the text that faster-whisper must transcribe for the command to fire. You can specify exact phrases like "movie time" or use pattern matching with wildcards to capture variable elements like "set bedroom to [number] degrees."
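To make the wildcard idea concrete, here is a minimal sketch of how a bracket-placeholder trigger could be compiled into a matchable pattern. The placeholder names and the `compile_trigger` helper are illustrative assumptions, not HomeOps's actual implementation:

```python
import re

# Hypothetical placeholder vocabulary; the real HomeOps syntax may differ.
PLACEHOLDERS = {"number": r"(\d+)", "word": r"(\w+)"}

def compile_trigger(phrase: str) -> re.Pattern:
    """Turn 'set bedroom to [number] degrees' into a case-insensitive regex."""
    pattern = re.escape(phrase)
    for name, regex in PLACEHOLDERS.items():
        pattern = pattern.replace(re.escape(f"[{name}]"), regex)
    return re.compile(rf"^{pattern}$", re.IGNORECASE)

trigger = compile_trigger("set bedroom to [number] degrees")
match = trigger.match("set bedroom to 68 degrees")
# match.group(1) captures the spoken value, here "68"
```

Exact phrases like "movie time" compile to a plain literal match, while placeholders become capture groups whose values can be passed along to the action payload.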
Actions are the device operations that execute when the trigger phrase is recognized. A single custom command can trigger multiple actions in sequence or in parallel. For example, a "movie time" command might dim the living room lights to 20 percent, turn on the TV smart plug, lower the motorized blinds, and set the soundbar to a specific input. Each action is defined by its target MQTT topic, the payload to publish, and an optional delay between steps. The dashboard provides a visual action builder where you drag device tiles into a sequence, set their desired states, and configure timing.
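The topic/payload/delay structure described above can be sketched as a simple runner. The topic names and payload fields below are made-up examples, and the `publish` callback stands in for whatever MQTT client the controller actually uses:

```python
import json
import time
from dataclasses import dataclass, field

@dataclass
class Action:
    """One step in a command: an MQTT topic, a payload, an optional delay."""
    topic: str
    payload: dict = field(default_factory=dict)
    delay_s: float = 0.0

# Example "movie time" sequence; topics are illustrative, not canonical.
MOVIE_TIME = [
    Action("home/living_room/lights/set", {"state": "ON", "brightness": 51}),  # ~20%
    Action("home/living_room/tv_plug/set", {"state": "ON"}, delay_s=1.0),
    Action("home/living_room/blinds/set", {"position": 0}, delay_s=1.0),
]

def run_actions(actions, publish, sleep=time.sleep):
    """Execute actions in order, waiting delay_s before each publish."""
    for action in actions:
        if action.delay_s:
            sleep(action.delay_s)
        publish(action.topic, json.dumps(action.payload))
```

Injecting `publish` and `sleep` keeps the runner testable without a live broker, which mirrors how a dry-run or test mode can reuse the same execution path.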
The command system also supports aliases, which let you define multiple trigger phrases for the same action set. If some family members say "goodnight" while others say "bedtime," both phrases can map to the same shutdown routine. Aliases are particularly useful for accommodating different speaking styles and vocabulary preferences within a household without duplicating the entire action configuration.
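Conceptually, an alias is just a second key pointing at the same routine. A minimal sketch, with invented routine names:

```python
# Both phrases resolve to one shared action set; nothing is duplicated.
ROUTINES = {"shutdown": ["lights_off", "lock_doors", "thermostat_night"]}
ALIASES = {"goodnight": "shutdown", "bedtime": "shutdown"}

def resolve(phrase: str):
    """Map a spoken phrase through any alias to its routine's action list."""
    routine = ALIASES.get(phrase, phrase)
    return ROUTINES.get(routine)
```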
Multi-Step Voice Macros and Contextual Room Commands
Voice macros extend single commands into multi-step workflows with conditional logic. A macro is essentially a small program triggered by a voice command. Beyond simple sequential actions, macros can include conditional checks: if the outdoor temperature is below 60 degrees, turn on the heater; otherwise, open the window vent. They can also include loops, such as cycling through all lights in a group and setting each one to a different color for a party scene.
Macros are edited in a visual flow editor within the dashboard. You drag condition nodes, action nodes, and delay nodes onto a canvas and connect them with arrows to define the execution flow. Each node is configurable with device targets, threshold values, and timing parameters. The flow editor validates the macro before saving, catching issues like unreachable nodes or missing device references. Once saved, the macro is compiled into a lightweight execution plan that runs on the HomeOps controller without requiring the dashboard to be open.
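The heater-or-vent example above can be pictured as a tiny node graph. The dict-based plan format below is purely a sketch of what a compiled execution plan might look like, not the controller's actual representation:

```python
def run_macro(nodes, node_id, state, fired):
    """Walk a node graph: condition nodes branch, action nodes append to `fired`."""
    while node_id is not None:
        node = nodes[node_id]
        if node["type"] == "condition":
            node_id = node["then"] if node["check"](state) else node["else"]
        elif node["type"] == "action":
            fired.append(node["do"])
            node_id = node.get("next")  # no successor ends the macro

# "If outdoor temp is below 60, heat; otherwise vent" as a three-node plan.
HEATER_OR_VENT = {
    "start":  {"type": "condition",
               "check": lambda s: s["outdoor_temp_f"] < 60,
               "then": "heater", "else": "vent"},
    "heater": {"type": "action", "do": "heater_on"},
    "vent":   {"type": "action", "do": "vent_open"},
}
```

Validation like the flow editor's unreachable-node check amounts to walking this same graph from the start node and flagging any node id that is never visited.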
Contextual commands are one of the most powerful features of the HomeOps voice system. Because each microphone satellite node is assigned to a specific room, the system knows where a command originated. This allows you to define commands that behave differently based on context. The phrase "turn off the lights" spoken in the kitchen turns off only the kitchen lights. The same phrase spoken in the bedroom targets only the bedroom lights. You do not need to specify the room name in your command because the system infers it from the satellite location.
You can also define room-specific commands that only exist in certain contexts. A command like "start the brew" might only be active on the kitchen satellite, triggering the smart coffee maker. Speaking the same phrase in the living room would produce an "unrecognized command" response since it has no mapping in that context. This prevents accidental cross-room triggers and lets you keep command vocabularies focused and relevant to each space.
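Both behaviors described above — room inference for global commands and room-scoped vocabularies — come down to one lookup order at dispatch time. Satellite ids, room names, and action names here are all invented for illustration:

```python
# Each satellite node is registered to a room (an assumption about the config).
SATELLITE_ROOMS = {"sat-kitchen": "kitchen", "sat-bedroom": "bedroom"}

# Global commands exist everywhere; the room only scopes their target.
GLOBAL_COMMANDS = {"turn off the lights": "lights_off"}

# Room-scoped commands exist only in their room's vocabulary.
ROOM_COMMANDS = {"kitchen": {"start the brew": "coffee_maker_on"}}

def dispatch(phrase: str, satellite_id: str):
    """Resolve a phrase against room-scoped commands first, then global ones."""
    room = SATELLITE_ROOMS[satellite_id]
    if phrase in ROOM_COMMANDS.get(room, {}):
        return room, ROOM_COMMANDS[room][phrase]
    if phrase in GLOBAL_COMMANDS:
        return room, GLOBAL_COMMANDS[phrase]
    return room, "unrecognized"
```

Checking the room-scoped table first lets a room override a global phrase if it ever needs to.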
Testing and Debugging Custom Commands
The Voice Configuration panel includes a testing mode where you can type a command phrase and see exactly how the system would parse and execute it. This lets you verify trigger matching, check action sequences, and troubleshoot any issues without needing to speak aloud repeatedly. The test mode shows the full execution trace: which trigger matched, which actions were queued, what MQTT messages were published, and what responses were generated.
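The essence of that test mode is a dry run: evaluate the phrase through the normal matching path but report what would happen instead of publishing. A hypothetical sketch, with an invented command-table shape:

```python
def dry_run(phrase: str, commands: dict) -> dict:
    """Trace what a phrase would do without publishing any MQTT messages."""
    for trigger, actions in commands.items():
        if phrase == trigger:
            return {"matched": trigger,
                    "queued": [a["topic"] for a in actions],
                    "published": False}
    return {"matched": None, "queued": [], "published": False}

# Example: a one-action command table for illustration.
COMMANDS = {"movie time": [{"topic": "home/living_room/lights/set"}]}
```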
A voice command log is also available in the dashboard, showing a history of all recognized commands, their transcription confidence scores, matched triggers, and execution results. If a command misfired or was not recognized, the log helps you identify whether the issue was a transcription error, a trigger mismatch, or an action failure. You can use this data to refine trigger phrases, add aliases for commonly misheard words, or adjust the faster-whisper model size for better accuracy.
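One practical way to mine that log is to surface unrecognized, low-confidence transcripts as alias candidates. The log-entry fields below (`text`, `matched`, `confidence`) are assumed from the description above, not a documented export format:

```python
from collections import Counter

def misheard_candidates(log, threshold=0.6):
    """Return unmatched low-confidence transcripts, most frequent first."""
    misses = Counter(entry["text"] for entry in log
                     if not entry["matched"] and entry["confidence"] < threshold)
    return [text for text, _ in misses.most_common()]
```

A phrase that shows up here repeatedly (say, "good nite" for "goodnight") is a strong hint to add it as an alias rather than retraining your household's pronunciation.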
Tip: Start with a small set of custom commands for your most frequent routines. As you get comfortable with the system, gradually add more. Keeping your command vocabulary focused prevents trigger collisions and makes voice control feel snappy and reliable.
What's Next
Custom voice commands turn HomeOps into a truly personalized assistant that understands your household's language and routines. In the next post, we shift focus to energy monitoring, exploring how CT clamp sensors provide real-time per-circuit power visibility and how you can use that data to identify consumption patterns and reduce your electricity costs. Voice control and energy awareness together form a powerful combination for an efficient, responsive home.