Re: Ep 1: Home Automation System Architecture
Posted: Thu Apr 26, 2012 4:05 am
Hi @mlinnen, interesting comments about using Azure. Personally I wouldn't tie myself to infrastructure that relies 100% on an Internet connection (or, indeed, any services beyond my direct control) to be operational, so I agree that would be problematic.
As you say, though, there are benefits to using those sorts of services, so the trick is getting those benefits without making them a point of failure.
The way Andy (@geekscape) and I are approaching that is to use local MQTT servers as the primary point of reference, while "federating" those servers across multiple locations. The ultimate plan is to have:
* Multiple MQTT servers running in the local environment with automatic failover, providing a reliable low-latency messaging system (see the sketch after this list).
* Federation between local and remote MQTT servers, providing exposure of a restricted set of data outside the local environment.
* Segmented access control across the federation infrastructure.
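To make the failover idea in the first bullet concrete, here's a minimal sketch of a client trying a list of local brokers in order. It uses the paho-mqtt Python library and made-up hostnames purely for illustration; it's not the actual code we're running.

import time
import paho.mqtt.client as mqtt

# Hypothetical local brokers, tried in order (illustrative names only)
BROKERS = [("mqtt-primary.local", 1883), ("mqtt-backup.local", 1883)]

def connect_with_failover(client):
    # Try each broker in turn until one accepts the connection
    for host, port in BROKERS:
        try:
            client.connect(host, port, keepalive=60)
            print("connected to", host)
            return
        except OSError:
            print("broker", host, "unreachable, trying next")
    raise RuntimeError("no MQTT broker reachable")

client = mqtt.Client()
connect_with_failover(client)
client.loop_start()
client.publish("sensors/temperature/livingroom", "21.5")
time.sleep(1)
client.loop_stop()

A production version would also need to notice a broker dropping out mid-session and reconnect, but the ordering logic above is the core of it.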
We already have MQTT servers running at our own houses, plus an external MQTT server on a VM at a hosting provider in the US, and we have federation working across multiple MQTT servers, including the application of filters: for example, I can publish temperature sensor data once per second to a topic on my local MQTT server, while having that topic shared to a remote MQTT server but only updated once per minute. The device publishing the sensor data doesn't need to know about any of that: it simply publishes to the local MQTT server as fast as it wants, and the filter in the federation mechanism takes care of rate-limiting the updates published to the remote server. The result is low-latency updates locally, where they matter, without blasting unnecessary data out remotely, while still getting the benefit of making that data available outside my local system.
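For the curious, that rate-limiting filter boils down to something like the following sketch: subscribe to the topic on the local broker, and republish to the remote broker at most once per minute. Again, paho-mqtt and the hostnames/topic are stand-ins for illustration, not our actual federation code.

import time
import paho.mqtt.client as mqtt

LOCAL_HOST = "mqtt.local"          # hypothetical local broker
REMOTE_HOST = "mqtt.example.com"   # hypothetical remote broker
TOPIC = "sensors/temperature/outside"
MIN_INTERVAL = 60.0                # seconds between remote updates

remote = mqtt.Client()
remote.connect(REMOTE_HOST)
remote.loop_start()

last_sent = 0.0

def on_message(client, userdata, msg):
    # Forward the local update to the remote broker, dropping any
    # messages that arrive before MIN_INTERVAL has elapsed
    global last_sent
    now = time.time()
    if now - last_sent >= MIN_INTERVAL:
        remote.publish(msg.topic, msg.payload)
        last_sent = now

local = mqtt.Client()
local.on_message = on_message
local.connect(LOCAL_HOST)
local.subscribe(TOPIC)
local.loop_forever()  # relay runs until interrupted

The nice property is that the rate limiting lives entirely in the relay: the publishing device and the local broker are completely unaware of it.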
There's a whole lot of interesting stuff going on that I'll cover in more detail in a future episode, and of course there's lots still to figure out.
--
Jon