All posts by Lewis

Home Automation with Hive Active Plugs

Problem

I’ve had a little bee in my bonnet for the last few years about a “feature” on my clothes dryer. It’s an auto-sensing dryer that dries clothes to certain levels such as Ready to Iron, Ready to Wear and Extra Dry. This is obviously a good thing since it only dries the clothes as much as required, rather than for a set time that might damage the clothes or waste electricity. It’s all well and good apart from the anti-crease feature of the dryer, which causes it to beep with a stupid tune, really loudly, every 30 seconds… FOREVER. Then every five minutes it tumbles for another 10 seconds or so to (supposedly) stop the clothes creasing.

Now, obviously the point here is that the incessant beeping is supposed to annoy you so you go and turn it off to stop wasting electricity and get your clothes out while they don’t look like a Shar-Pei, but there’s one oversight – you can’t put a load in the dryer on any of the auto-sensing modes before going to bed. If the dryer was inside the house, you’d be woken up by the incessant beeping every 30 seconds. In my case, the dryer is in the garage so it’s likely to wake the neighbours up if they have their windows open. Not great for neighbour relations.

I toyed with ideas like using a Raspberry Pi with a mic and writing some code to listen for and recognise when the stupid beeping started, but writing node.js software to do Fast Fourier Transforms just isn’t in my skill set.

Partial Solution

I initially bought a timer plug that would switch on and off, but this was a poor option since it basically spent most of its time turned on. I decided to see if it was possible to find a countdown timer plug and, to my surprise, they do exist, but only with preset times like 15 minutes, 30 minutes, 1 hour, 2 hours, 4 hours and 8 hours. You can’t set a specific time. Usually, a full load in the dryer takes just over an hour to reach Extra Dry, so we would have to set the plug to 2 hours, meaning it would beep, every 30 seconds, for 40-50 minutes. I’ve lived with this solution for probably two years now.

Solution

About a year ago, I bought a Hive Heating system. I checked out Nest, Tado and other competitors, but the Hive 2 looked (aesthetically) good, I didn’t need to have the thermostat placed in a (mostly) fixed location and it ran on batteries to boot. If I’m spending the evening in the front room watching TV, it might as well be in the room with me, keeping the temperature in that room at the level I set. I’ve been VERY happy with the Hive solution and, as a bit of a tinkerer, knowing they had an API, I started to develop a PowerShell 5 class that wraps it, which I called PoSHive. PoSHive allows control over the heating system using simple typed commands. It has its limits – it doesn’t yet support hot water or multi-zone systems, but those will come when I have access to a Hive system that controls hot water and multiple zones as well as heating. Anyway, I digress. Hive is awesome.

So, a few months back, Hive announced they were extending their smart heating and hot water system into a fully-fledged home automation ecosystem by introducing sockets, window and door sensors, and light bulbs – all smart, so they can be controlled with an app… and, quite likely, PoSHive after some more additions. Initially I didn’t buy a Hive Active Plug simply because I couldn’t think of a sensible use for it, but then a mini heatwave in May 2017 made me buy a fan and, being a lazy sod, I decided to stick an Active Plug on the order as well so I could turn it on and off as desired.

Once the plug arrived I tinkered with it a bit and worked out how the app communicated with the API so I could incorporate the HTTP calls into PoSHive. As I was writing the methods and updating the class, I saw that the plug was reporting a powerConsumption property. I knew that my fan was a 30W fan, and the values reported by the powerConsumption property looked very much like watts. A (smart) lightbulb came on.

Requirements

If the Active Plugs report power consumption through a property of the plug, that means I know when the device attached to the smart plug is consuming power. If I know when the dryer is consuming power to dry the clothes, I also know when it isn’t, and therefore have a way of knowing that it has finished. I had written the code for PoSHive already; I just needed a script that would regularly monitor usage and let me know when it was done. I wrote the script using PowerShell and, of course, PoSHive, but needed a sensible way of triggering it. My fiancée isn’t about to turn on a computer and run a PowerShell script just to see when the dryer has finished, and phoning me to do it equally wasn’t going to cut the mustard.

I started out looking at Azure Function Apps using PowerShell code, which looked like a decent solution up until I saw they were only able to use PowerShell 4.0. Classes are only supported by PowerShell 5, so Azure Functions were a non-starter. One of the articles I read mentioned that if you require PowerShell 5.0, you should use Azure Automation.

I’d already used Azure Automation fairly successfully with PoSHive to do very simple things like telling me the temperature inside the house, but at the time I couldn’t really think of a use case beyond proving PoSHive worked inside Azure Automation. Azure Automation is designed to run headless and is very much asynchronous – run the script and it could take anywhere up to 10 minutes to actually start, so it doesn’t work well for synchronous calls and results. If anything goes wrong, you check the output in the portal. Given that the script just checks the plug regularly and then turns it off when it’s no longer in use, Azure Automation is perfect.

As I mentioned, I needed a way of triggering the script without logging on to Azure – if I had to do that, I might as well just run the script on my computer. I knew that Azure Automation also supported webhooks, allowing me to use a simple HTTP POST to trigger the PowerShell runbook on Azure without touching the portal.

So, I have my solution:

  • PoSHive Class – I publish this to the PowerShell Gallery to make installing easy for anybody on PowerShell 5+: Install-Module PoSHive
  • Azure Account with an Azure Automation resource.
  • Monitoring script written in PowerShell
  • Hive Active Plug – obviously it must be attached to your system and working.
  • Method of sending an HTTP POST – there are many.

Putting it all together

Firstly, I installed the PoSHive class into the Azure Automation account.

I create a Credential asset inside my Azure Automation account which holds my Hive username and password securely, rather than having them in the script. I also have a SendGrid account on Azure that I’ll use to send an email from within the script. Again, those credentials are stored in a Credential asset.

I add the PowerShell script as a runbook to the Azure Automation account and ensure it is published.

I then add a Webhook to the Runbook. It’s important to save the URL of the webhook since once you click OK, you’ll never see it again. This URL will be used to trigger the runbook instead of having to log on to Azure and click Run on the Runbook. Webhooks do expire after 1 year so be aware of that.

The script used for the Dryer-Monitor PowerShell Runbook is as follows:
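In outline it looks like the sketch below. The PoSHive plug method names (GetPlugPowerConsumption, SetPlugState), the credential asset names, the plug name and the idle threshold are illustrative assumptions, so check the PoSHive documentation for the exact members:

```powershell
using module PoSHive

# Credential assets stored in the Automation account (names are examples)
$HiveCred     = Get-AutomationPSCredential -Name 'HiveCredential'
$SendGridCred = Get-AutomationPSCredential -Name 'SendGridCredential'

$Hive = [Hive]::new($HiveCred.UserName, $HiveCred.GetNetworkCredential().Password)
$Hive.Login()

$PlugName      = 'Dryer'   # the name of the Active Plug in the Hive app
$IdleThreshold = 5         # watts - below this the dryer is considered finished
$IdleCount     = 0

# Give the dryer a chance to start drawing power before checking begins
Start-Sleep -Seconds 300

while ($IdleCount -lt 5) {
    $Watts = $Hive.GetPlugPowerConsumption($PlugName)   # assumed method name
    Write-Output "$(Get-Date -Format s) - $Watts W"
    if ($Watts -lt $IdleThreshold) { $IdleCount++ } else { $IdleCount = 0 }
    Start-Sleep -Seconds 60
}

# Idle for 5 consecutive checks - switch the plug off and send the email
$Hive.SetPlugState($PlugName, 'OFF')                    # assumed method name
$Hive.Logout()

Send-MailMessage -To 'me@example.com' -From 'dryer@example.com' `
    -Subject 'The dryer has finished' -Body 'The Hive Active Plug has been switched off.' `
    -SmtpServer 'smtp.sendgrid.net' -Port 587 -UseSsl -Credential $SendGridCred
```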

Triggering

The last piece of the puzzle is a simple method of triggering the monitor to activate. Yes, you can just do it from the Azure Portal but then there would be no need for the Webhook. Instead I decided to use an app on my Android phone called HTTP Shortcuts. It allows me to assign an HTTP action to a shortcut I can place on my Android home screen. A single press will send an HTTP POST to the Webhook, the Azure Automation Runbook will trigger and that’s it. Install the app, paste the URL in, change the method, click the tick then, if desired, create a shortcut on your Home screen.

Now, because the method of triggering the Webhook is a simple HTTP POST, you can use a million ways of sending that request: a hacked Amazon Dash button, a Raspberry Pi with a physical button, or even IFTTT and Maker Webhooks. In fact, I have done just that with Google Assistant and Maker Webhooks on IFTTT, so I just have to say “OK Google, monitor the dryer.” and the POST request is sent, causing the Runbook to trigger. I don’t even have to lift a finger.
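For reference, sending the request from PowerShell itself is a single call (the URL below is a placeholder for the webhook URL you saved earlier):

```powershell
# Placeholder URI - substitute the webhook URL saved when the webhook was created
Invoke-RestMethod -Method Post -Uri 'https://s1events.azure-automation.net/webhooks?token=<your-token>'
```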

Once a load has gone in the dryer, I trigger the webhook from my phone or using Google Home or Google Assistant, and once the dryer has finished, the script turns the plug off to stop the beeping and sends me an email to say it’s done. I’ve made my dumb tumble dryer as smart as anything else on the market today. Out of interest, here’s what the output of the script looks like from Azure Automation – I could even plot power consumption if I wanted to.

-Lewis

Running Azure PowerShell Workflows without Azure Automation

While developing some Workflows recently I was frustrated by something that was seemingly so simple to resolve. I’d log in to Azure and then run my parallel Workflow to perhaps start up a set of VMs in a Resource Group and get slammed with a boat load of errors.

This is the code I was running – it is specifically not designed to run in Azure Automation because I don’t want it to. Running this in Azure Automation requires a significant number of steps that I don’t want to perform so I’m not using the RunAs account. It’s a few lines of PowerShell designed to start up the VMs in a resource group quickly.
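In shape, it was something like the sketch below (the workflow and resource group names are placeholders):

```powershell
# Roughly the shape of the original - assumes Login-AzureRmAccount has already
# been run in the parent session.
workflow Start-RGVMs {
    param (
        [Parameter(Mandatory)][string]$ResourceGroupName,
        [Parameter(Mandatory)][string[]]$VMNames
    )
    foreach -parallel ($VMName in $VMNames) {
        # Fails: the parallel child jobs have no knowledge of the parent's login
        Start-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VMName
    }
}

Login-AzureRmAccount
$Names = (Get-AzureRmVM -ResourceGroupName 'MyResourceGroup').Name
Start-RGVMs -ResourceGroupName 'MyResourceGroup' -VMNames $Names
```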

This would just blow up. The problem was clear enough: as a child of the parent, the parallel execution has no knowledge of my account being logged in, and I just wasn’t sure how to resolve that. Many moons ago I tinkered with using Save-AzureRmProfile to save me from having to log in every time I wanted to do something in Azure, but I fell out of love with the solution since the saved profile never seemed to last very long. After some Googling, I happened upon an issue report on GitHub for Azure PowerShell that seemed relevant.

The issue report was answered by Mark Cowl, who advised that the child job has no knowledge of the profile and that, by passing the profile to the child job in the right place using Select-AzureRmProfile, you could work around the problem. The lightbulb came on and I altered my script and Workflow to:
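A sketch along those lines follows; here the profile is saved to a file once and selected by path inside the parallel block (the paths, names and throttle value are placeholders):

```powershell
workflow Start-RGVMs {
    param (
        [Parameter(Mandatory)][string]$ResourceGroupName,
        [Parameter(Mandatory)][string[]]$VMNames,
        [Parameter(Mandatory)][string]$AzureProfilePath
    )
    foreach -parallel -throttlelimit 4 ($VMName in $VMNames) {
        # Each child job loads the saved profile before doing any Azure work
        $null = Select-AzureRmProfile -Path $AzureProfilePath
        Start-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VMName
    }
}

# Log in once, save the profile to disk, then hand the path to the workflow
Login-AzureRmAccount
Save-AzureRmProfile -Path "$env:TEMP\AzureProfile.json" -Force
$Names = (Get-AzureRmVM -ResourceGroupName 'MyResourceGroup').Name
Start-RGVMs -ResourceGroupName 'MyResourceGroup' -VMNames $Names -AzureProfilePath "$env:TEMP\AzureProfile.json"
```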

Simple when you think about it.

You’ll notice in the foreach -parallel there is a throttlelimit set. I have my reasons for including this, yours may differ so feel free to remove the throttle altogether or alter it appropriately.

-Lewis

 

Set language, culture and timezone using PowerShell

As much as I’d like to say this is a perfect one-liner solution, it isn’t. Technically, it doesn’t all count as PowerShell either. This article is written for people deploying from Azure Marketplace images, which always get the US locale even when you need it to be GB. Clearly, if the image you deploy from has been customised, or is already based on the en-GB image of your OS, you probably won’t have these problems, but when you deploy from the Marketplace image, you get en-US settings.

I’ve seen the cmdlets introduced in Windows 8 and Windows Server 2012 that allow you to set the language, culture and timezone, but these are lacking. The primary issue is that they provide no method to affect the Default user account, so any time a new user logs on they get en-US language settings, which for anyone in the UK is a huge pain. The other issue is compatibility. Not everyone is ready to deploy Windows Server 2016 or even Windows Server 2012, and so some are stuck on Windows Server 2008 R2, for better or worse.

If you’re deploying in Azure, Microsoft provide a Marketplace image that makes life very easy, instead of rolling your own image and deploying from that. While Managed Disks are the first step to removing the frustration of handling your own custom images, they’re brand new, and with anything new it takes time to bed in and become part of a well-honed process.

With all this background covered, there are undoubtedly full PowerShell solutions that will achieve the same as what I’m about to give, but they are also unquestionably more complicated to deploy and maintain.

To resolve the issue of deploying from Azure Marketplace images that have en-US set as default across the board, I implement a CustomScriptExtension for every VM build. I won’t cover the detail of the approach here, but I will provide a link to an Azure article that I contributed to, which covers using Custom Script Extensions, specifically with private Azure Blob Storage. I sent the pull request to update the document about 5 days ago but it hasn’t been accepted yet; the PR is here if you cannot see any mention of my changes in the article.

The solution I implement requires two files to be downloaded from private Blob Storage: one is a PowerShell script, the other an XML file containing the language settings. The CustomScriptExtension then runs a command line which invokes the PowerShell script and changes the language settings and timezone, among other things.

I’ve deployed thousands of Azure VMs and this CustomScriptExtension has been instrumental in saving days of manually changing settings. The following code is the PowerShell (well, OK, there’s some PowerShell in there) to initialise the VM with language, locale, culture and timezone, as well as to format/prepare any RAW data disks – which you might not want to do, depending on whether you need Storage Spaces/striping.
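The sketch below shows the shape of that script. The file names, timezone and disk-formatting section are assumptions for illustration (the Storage cmdlets need Server 2012 or later; use diskpart on 2008 R2):

```powershell
# Line 2 imports the regional settings (including the Default user's) from UKRegion.xml
Start-Process -FilePath "$env:SystemRoot\System32\control.exe" -ArgumentList "intl.cpl,, /f:`"$PSScriptRoot\UKRegion.xml`"" -Wait

# Set the timezone - tzutil.exe is available from Windows Server 2008 R2 onwards
& "$env:SystemRoot\System32\tzutil.exe" /s 'GMT Standard Time'

# Initialise, partition and format any RAW data disks. Remove this section if
# you intend to use Storage Spaces/striping instead.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | ForEach-Object {
    Initialize-Disk -Number $_.Number -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Data' -Confirm:$false
}
```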

You’ll notice that line 2 actually calls control.exe and imports an XML file called UKRegion.xml. Here’s the contents of that file – it is specifically coded for UK (en-GB) language, home location and so on, so if you’re elsewhere in the world you’ll need your own location’s codes. Check out this MSDN article for more information: https://msdn.microsoft.com/en-us/goglobal/bb964650
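A representative en-GB example of the format (documented at the MSDN link above) looks like the following; the GeoID, locale name and keyboard layout ID would change for other regions:

```xml
<gs:GlobalizationServices xmlns:gs="urn:longhornGlobalizationUnattend">
    <!-- Apply to the current user and copy the settings to the Default user and system accounts -->
    <gs:UserList>
        <gs:User UserID="Current" CopySettingsToDefaultUserAcct="true" CopySettingsToSystemAcct="true"/>
    </gs:UserList>
    <!-- en-GB user locale and system locale -->
    <gs:UserLocale>
        <gs:Locale Name="en-GB" SetAsCurrent="true"/>
    </gs:UserLocale>
    <gs:SystemLocale Name="en-GB"/>
    <!-- United Kingdom home location (GeoID 242) -->
    <gs:LocationPreferences>
        <gs:GeoID Value="242"/>
    </gs:LocationPreferences>
    <!-- English (UK) keyboard layout as the default input language -->
    <gs:InputPreferences>
        <gs:InputLanguageID Action="add" ID="0809:00000809" Default="true"/>
    </gs:InputPreferences>
</gs:GlobalizationServices>
```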

If you’re not trying to automate Azure builds and tinkering with JSON ARM templates isn’t required, then you need only the PowerShell and XML as shown above. Simply copy the PowerShell code and XML code into two separate files (named appropriately), drop them onto the system you want to update and run the PowerShell. If you don’t want the disk format section, remove it. I should warn you that you DO need a reboot to fully apply the settings to the whole system.

For those of you doing Azure-based deployments who want to take advantage of the above during deployment using a CustomScriptExtension, the resource I use in my JSON Azure Resource Manager template looks similar to the following. I actually have all of my inputs parameterised, but since you’ll want to see the information I’m actually passing in, I’ve manually edited the resource below, substituting in the values that would be passed from the parameters section (which isn’t shown). There is also the Set-AzureRmVMCustomScriptExtension cmdlet if you’re building with Azure PowerShell.
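A representative sketch of that resource is below; the VM name, script/XML URLs, storage account name and key are all placeholders:

```json
{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "MyVM/InitialiseVM",
    "apiVersion": "2015-06-15",
    "location": "[resourceGroup().location]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', 'MyVM')]"
    ],
    "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.8",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "fileUris": [
                "https://mystorageaccount.blob.core.windows.net/scripts/Initialize-VM.ps1",
                "https://mystorageaccount.blob.core.windows.net/scripts/UKRegion.xml"
            ]
        },
        "protectedSettings": {
            "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File Initialize-VM.ps1",
            "storageAccountName": "mystorageaccount",
            "storageAccountKey": "<storage-account-key>"
        }
    }
}
```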

“Why not use DSC!” I hear you scream. Absolutely, yes, you can definitely do that but I can tell you for free that you will reach a fully configured solution that you can log on to and start using far quicker with this method than you will if you use DSC which might have to install WMF, reboot, compile the MOF, configure LCM and then apply the config. It’s all about the right tool at the right time and for simple[?!] language settings and formatting disks, a CSE is the way to go as far as I’m concerned.

-Lewis

Clean up orphaned Azure resource manager disks

Firstly, let me apologise, this post isn’t about automatically cleaning up disks in Azure Resource Manager storage accounts, instead it’s about obtaining the information which you can use to understand which disks are orphaned and can or should be deleted.

When deleting a virtual machine on its own, I’m sure you’re aware that the disks are not automatically deleted as part of that process. The recommended best practice is to put every resource that shares the same lifecycle into the same resource group, which can then be deleted as a single entity, removing virtual machines, storage accounts, NICs, load balancers etc. in one hit.

There are occasions where it’s preferable to delete a single virtual machine on its own – doing this leaves orphaned disks behind in your storage accounts, gathering dust (and costing money!) unless you manually delete the disks yourself. We all know that as we stand up and delete VMs in Azure, there are times when cleaning up just isn’t convenient and the disks get left and forgotten about. Many moons later we decide it’s a good time to clean up those orphaned disks, but finding them manually or using Storage Explorer is a painful process when there are potentially hundreds of storage accounts in a subscription.

Now that Managed Disks have gone Generally Available, this problem will occur less and less for people using those but for now, most people would benefit from understanding their VHD/storage account layout and usage.

I wrote the following PowerShell script to iterate over all (or some of) the storage accounts in a subscription, pulling out the details of any blob in any container whose name ends with .vhd. Rather than trying to identify unleased blobs and automatically deleting them (which is very dangerous when you have Azure Site Recovery protected objects), I chose to simply generate an object containing the information for every blob ending with .vhd. For those of us happy using PowerShell for filtering, we can then filter the object as we see fit, or just output to CSV and filter in Excel, for example.
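The original script is longer than this, but a minimal sketch of the idea looks like the following; it is non-destructive and only builds a report object (names and the CSV path are placeholders, and depending on your AzureRM version the storage key may be at .Key1 rather than [0].Value):

```powershell
# Enumerate storage accounts, list every blob ending in .vhd and collect its
# details into an object for later filtering. Nothing is deleted.
$Results = foreach ($StorageAccount in (Get-AzureRmStorageAccount)) {
    $Key = (Get-AzureRmStorageAccountKey -ResourceGroupName $StorageAccount.ResourceGroupName `
            -Name $StorageAccount.StorageAccountName)[0].Value
    $Context = New-AzureStorageContext -StorageAccountName $StorageAccount.StorageAccountName -StorageAccountKey $Key

    foreach ($Container in (Get-AzureStorageContainer -Context $Context)) {
        foreach ($Blob in (Get-AzureStorageBlob -Container $Container.Name -Context $Context |
                           Where-Object Name -like '*.vhd')) {
            # No cmdlets in here - just shape the output from what we already have
            [PSCustomObject]@{
                StorageAccount = $StorageAccount.StorageAccountName
                ResourceGroup  = $StorageAccount.ResourceGroupName
                Container      = $Container.Name
                BlobName       = $Blob.Name
                SizeGB         = [math]::Round($Blob.Length / 1GB, 2)
                LeaseStatus    = $Blob.ICloudBlob.Properties.LeaseStatus
                LastModified   = $Blob.LastModified
            }
        }
    }
}

$Results | Export-Csv -Path .\VhdReport.csv -NoTypeInformation
```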

You can target this at all storage accounts, some (via a regex match or like condition) or a specific account by adjusting the Get-AzureRmStorageAccount switches, which act as the primary filtering mechanism. This script is non-destructive; it DOES NOT delete anything.

In my own tests, operating against ~90 storage accounts in a single subscription and retrieving data on ~600 VHD objects, the information was populated into the object and a CSV created in ~50 seconds. You can of course include additional information in the object by adjusting the object properties accordingly, but I would highly recommend that you do not use any cmdlets inside the foreach loop; instead, get the source information outside of the loop and then filter for that information from the object inside the foreach, as I do with $StorageAccount.

You can use the CSV to identify objects that are surplus to requirements, badly named, in the wrong account/type and act on the information as you desire, but don’t be tempted to auto-delete anything from its output unless you’re supremely confident you know what you’re doing and especially not if you use Azure Site Recovery to protect on-premises machines…doing so is a Bad Idea™

Before I disappear, and as a small bonus for reading this far: in my quest to gather this information elegantly, and before I properly investigated the ICloudBlob object’s properties, I also put together the following function which, given an HTTPS endpoint and the storage account key, uses the Azure Storage API to get the properties of a blob as response headers from an HTTP request. If nothing else, it shows how to interact with the Storage API via web requests in PowerShell and construct an authenticated request.
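Here’s a sketch of that function; it signs a HEAD request with Shared Key Lite authentication, and the function and parameter names are illustrative:

```powershell
function Get-BlobProperties {
    param (
        [Parameter(Mandatory)][uri]$Uri,                 # full HTTPS URL of the blob
        [Parameter(Mandatory)][string]$StorageAccountKey # storage account access key
    )

    $AccountName = $Uri.Host.Split('.')[0]
    $Date = (Get-Date).ToUniversalTime().ToString('R')   # RFC1123 date
    $ApiVersion = '2015-12-11'

    # Shared Key Lite string-to-sign for the Blob service
    $CanonicalisedHeaders = "x-ms-date:$Date`nx-ms-version:$ApiVersion`n"
    $CanonicalisedResource = "/$AccountName$($Uri.AbsolutePath)"
    $StringToSign = "HEAD`n`n`n`n$CanonicalisedHeaders$CanonicalisedResource"

    # Sign the request with the storage account key (HMAC-SHA256)
    $Hmac = New-Object System.Security.Cryptography.HMACSHA256
    $Hmac.Key = [Convert]::FromBase64String($StorageAccountKey)
    $Signature = [Convert]::ToBase64String($Hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($StringToSign)))

    $Headers = @{
        'x-ms-date'     = $Date
        'x-ms-version'  = $ApiVersion
        'Authorization' = "SharedKeyLite $($AccountName):$Signature"
    }

    # A HEAD request returns the blob's properties (length, lease status, etc.)
    # as response headers rather than downloading the blob itself.
    $Response = Invoke-WebRequest -Uri $Uri -Method Head -Headers $Headers -UseBasicParsing
    return $Response.Headers
}
```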

As a quick update, the above function can also easily become a replacement for Get-AzureStorageBlobContent by changing the method to GET and updating the return object. Why would you do this? Well, in my scenario I might be using Azure Automation and don’t want to download a file just to open it and read its contents. Get-AzureStorageBlobContent insists on downloading the file and I don’t want it to do that; the function below uses the Storage REST API to get a file’s contents as an object – obviously not something you’d do with a VHD, but a CSV or JSON block blob, for example, isn’t large, is parseable and sits in an object nicely.
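The GET variant, again as a sketch with illustrative names:

```powershell
function Get-BlobContent {
    param (
        [Parameter(Mandatory)][uri]$Uri,
        [Parameter(Mandatory)][string]$StorageAccountKey
    )

    $AccountName = $Uri.Host.Split('.')[0]
    $Date = (Get-Date).ToUniversalTime().ToString('R')
    $ApiVersion = '2015-12-11'

    # Same Shared Key Lite signing as Get-BlobProperties, but for a GET request
    $StringToSign = "GET`n`n`n`nx-ms-date:$Date`nx-ms-version:$ApiVersion`n/$AccountName$($Uri.AbsolutePath)"
    $Hmac = New-Object System.Security.Cryptography.HMACSHA256
    $Hmac.Key = [Convert]::FromBase64String($StorageAccountKey)
    $Signature = [Convert]::ToBase64String($Hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($StringToSign)))

    $Headers = @{
        'x-ms-date'     = $Date
        'x-ms-version'  = $ApiVersion
        'Authorization' = "SharedKeyLite $($AccountName):$Signature"
    }

    # The blob's body (e.g. a small CSV or JSON block blob) comes back in Content
    return (Invoke-WebRequest -Uri $Uri -Method Get -Headers $Headers -UseBasicParsing).Content
}
```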

-Lewis

Censoring the UK – no porn please, we’re British #DEBill

Won’t somebody think of the children!

So they’ve finally done it. The UK Government has announced that they’re planning on censoring access to tens of thousands of websites featuring “unconventional” pornography and erotica, all in the name of saving the children. The implication is that anything unconventional is considered illegal and must be blocked. Being gay was unconventional in the 40s and 50s, so people who were gay were persecuted and prosecuted. The man who helped end World War 2 – Alan Turing – was gay, and he eventually took his own life after being convicted of the “crime” of gross indecency based on his relationship with a young man.

Even as consenting adults – ADULTS! – the Government believe it is somehow their right to tell us what is conventional and therefore acceptable. It is delivered under the guise of protecting children when in reality, its purpose is to define what they consider to be acceptable/conventional by censoring. If it were really the Government’s stance to protect the children, they should be educating, not censoring. I don’t mean educating children, I mean educating adults. It is no longer acceptable that in the Internet Age there are adults out there with children who have no grasp whatsoever of how to protect THEIR OWN CHILDREN online yet provide them with iPads for Christmas.

Investigatory Powers Act 2016

A few days ago I was astonished as the IPBill made its way through parliament with barely a whimper from UK citizens. Seemingly there are not enough people in the UK who care about their online privacy or mind that the likes of the Food Standards Agency and their local Fire Service can now browse their Internet history. After all, if they’ve got nothing to hide, they’ve got nothing to fear, right? For now that may be true, but do you implicitly trust every person in the Civil Service who will now have access to your Internet history? If you have ever cleared your browser history – and I mean EVER – then you should have stood up to be counted as someone who opposed the #IPBill. That it has passed and is now all but law means you’ve lost your chance.

Consider a future where the Investigatory Powers Act 2016 has been in use for years:

You have a son who you know is gay though it is a closely guarded family secret because he doesn’t want it to be common knowledge. He has used the Internet in your home since he was a teenager when he came out to you. He has expressed an interest in joining the Fire service. He’s physically fit and the local Fire service are in desperate need of new recruits but his application is turned down. On pressing for a reason he’s told that he is incompatible with the Fire service and they won’t be progressing his application. You take up the challenge on your son’s behalf and speak with the Fire service to ask for an explanation. You are told that unless you want your son’s sexuality to be discussed in public, you should withdraw your challenge. How could you ever approach this without exposing your son’s sexuality – something that is very private to him?

The point I’m trying to make here is that while you think your Internet history is boring, the Government don’t think it’s boring – otherwise they wouldn’t want access to it. You have absolutely no idea how that information will be stored, accessed, by who, when and for what reason. This is not the only example I could give – consider an Ashley Madison style hack and data leak of billions of internet connection records. In the days after such a leak, the number of people committing suicide and filing for divorce would skyrocket and the impact would be devastating. You may think that what I’m saying is all very “tin-foil-hat” like but can you honestly say that this could or would not happen? If you think you can, I’m telling you you’re wrong, very very wrong.

I work in IT (having worked in both public and private sectors) and I have zero confidence in any private company or public entity storing personal information regarding my personal browsing habits. I know that one day, that information will be leaked – so I must take steps to protect myself from Government monitoring.

Digital Economy Bill

As for porn blocking, the Government are entirely reliant on the public’s apathy and embarrassment in raising issue with their proposals and they know nobody cares enough to stand up and be counted for fear of being labelled a pervert, or worse.

When I found out about the Government’s proposal to ban unconventional pornography, I mentioned it to my fiancée. Her reaction was absolutely priceless and is exactly what the Government are relying on to prevent much scrutiny, in both parliament and public. Scrunching up her face, seemingly disgusted that I should be concerned about something like this, she said: “Why do you care? Are you into unconventional porn?” – the simple fact that I had raised it as a talking point was enough for her to form an opinion of me, despite knowing me for nine years and accepting my proposal.

We are all private people and we don’t want anybody other than our sexual partners to know what our sexual preferences are. This attempt (and I sincerely hope it is only an attempt!) by the Government is to root out or silence people with “unconventional” sexual preferences. If they succeed in introducing censorship of “unconventional” pornography, they will only succeed in forcing the individuals that have “unconventional” sexual interests underground, making them and the pornographers criminals in the same way that Alan Turing was considered a criminal – a conviction that led him to take his own life. Alan Turing is a man now celebrated in film and revered in the intelligence services – the same services that are now gathering as much personal information as possible on every person in the UK. Not only have they put in place the means by which they can identify people accessing “unconventional” pornography, they are making it illegal so that they can prosecute people for it.

I am so bitterly disappointed in the Government, and so bitterly disappointed in the public, that I have essentially lost all hope for Britain. The Government’s proposal to introduce age verification was doomed to fail from the moment it was suggested. They knew that implementing age verification would fail for many sites that don’t care about UK law, so the Government had a reason to suggest blocking non-compliant sites. In order to block any site that doesn’t introduce age verification, they will need to whitelist, which absolutely WILL affect other sites not related to pornography. To have a whitelist they must classify pornography and decide what is acceptable and what is unconventional. Make no mistake, this is the Government telling you what is normal. Remember, the type of content they want to censor is not illegal, but they’re proposing to make it so, and any site that doesn’t introduce age verification for UK users will also be blocked – whether their content is “unconventional” or not.


Surveillance controls, and absolute surveillance controls absolutely. We are on the slippery precipice of a very, very dangerous fall, and if this amendment to the Digital Economy Bill passes through unscathed, blocking of “unconventional” porn is only the start. Do not sit down and tell yourself it doesn’t matter to you or that your porn interests are conventional. It may not affect you, but it does affect hundreds of thousands, if not millions, of other law-abiding, normal people.

Stand up, be counted and oppose censorship in the Digital Economy Bill by writing to your MP and telling them to oppose the amendment. Censoring helps nobody.

PowerShell for Seasons

I wrote this little snippet to assist me with working out the current season (as in Spring, Summer, Autumn and Winter) with an accuracy of +/- 2 days. No, it’s not a method of calculating the exact time and date that the equinoxes will occur in a certain hemisphere but it’s good enough. I should mention that this is for the Northern Hemisphere so if you want one for the Southern Hemisphere, it should be pretty simple to work out what to change! The function will tell you the current season (if you don’t provide a date) or the season of a certain date you give it.

I looked at Wikipedia to see the next dates of the Equinoxes and they all fall on or around the dates below with an accuracy of +/- 2 days. If you want hyper-accuracy, don’t use this.
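A sketch of the function follows; the boundary dates used here (20 March, 21 June, 22 September, 21 December) are approximations, so adjust to taste:

```powershell
function Get-Season {
    param (
        [datetime]$Date = (Get-Date)   # defaults to today if no date is given
    )
    $Year = $Date.Year
    # Approximate Northern Hemisphere equinox/solstice boundaries (+/- 2 days)
    switch ($Date) {
        { $_ -lt [datetime]"$Year-03-20" } { return 'Winter' }
        { $_ -lt [datetime]"$Year-06-21" } { return 'Spring' }
        { $_ -lt [datetime]"$Year-09-22" } { return 'Summer' }
        { $_ -lt [datetime]"$Year-12-21" } { return 'Autumn' }
        default                            { return 'Winter' }
    }
}

Get-Season                      # the season today
Get-Season -Date '2017-07-15'   # Summer
```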

 

Using Azure DNS for dynamic DNS

As a long-time user of DynDNS, and a generally happy customer, I’ve paid for their dynamic DNS services to keep my website up and running despite my regularly changing home IP address, using a combination of a rather badly named DynDNS domain zone and CNAME records on my actual domain that resolve to the DynDNS zone.

I’ve avoided taking up their rather expensive Managed DNS services at $7 per month to provide a very simple service but a recent job I’ve been working on has had me looking at all of Microsoft’s Azure services in detail. Despite it being in preview, I decided to take a closer look at Microsoft’s Azure DNS to see what it can do.

In essence, Microsoft Azure DNS allows you to host your domain zone and records in Azure but not to purchase a domain. For example, to use Azure DNS you must use a domain registrar to purchase a domain (or have one already) and then delegate the zone to Azure DNS by altering nameservers of the zone to Microsoft’s. This is usually very straightforward to do but differs between registrars. My domain registrar is 123-reg so their nameserver update page looks as follows. Ignore the nameserver settings here, yours are likely different.

[Screenshot: 123-reg nameserver update page]

I’m not going to go through the process of signing up for Azure etc. There’s plenty of documentation out there covering the process in graphic detail.

As you know from the title of this post, the reason for doing this is to implement a dynamic DNS solution for your own domain using Azure DNS. Typically, for static IP addresses this is completely unnecessary, but in my circumstances, where I host my personal website from my own home network and have a frequently changing IP address, it’s a necessity. Compared to DynDNS’s managed DNS services at $7 per month for their lowest bracket ($84 per year!), Azure DNS is an absolute STEAL, charging only 15 pence per zone, per month for the first 25 zones. This pricing represents a 50% discount over production prices while the service is in preview, but even then, if I had 5 zones hosted in Azure, I’d spend less than 25% of what DynDNS charges.

Just to re-iterate, Azure DNS is in preview for now. For my purposes (personal use for my own blog) it’s fine. If people can’t reach my site for a short while, it isn’t the end of the world – that said, I don’t expect that to happen.

So, back to it. We’ve got a domain and we have delegated it to Azure DNS by changing the nameservers at our registrar’s site. So how do we use this for dynamic DNS? Azure DNS exposes three methods of configuration; unfortunately a REST API isn’t one of them, but PowerShell is. A short while ago I created a PowerShell script that could be used to update DynDNS records using pure PowerShell, so this can be used as a template for integrating with Azure DNS. The trade-off is the need to run a PowerShell script from inside your network. I’ve no doubt there’s a way to integrate this with Azure Automation and Webhooks, and it may be a little extension project I use to expand on this article in the future, but for now, this is a script that must execute from inside the network with the IP address you will use to update Azure DNS records.

The process itself is very simple; get the current IP address of your Internet connection, compare it with what’s set for the A record in the Azure DNS zone and if it doesn’t match, update the record in the zone. Easy really.

Here’s a script that’ll get that done.
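A sketch of that script follows. It assumes you’re already logged in with Login-AzureRmAccount; the zone, resource group and record names are examples, and it grabs the external IP from a public echo page rather than querying the router directly as the original does:

```powershell
# Example values - change for your own zone and resource group
$ZoneName          = 'lewisroberts.com'
$ResourceGroupName = 'DNS'
$RecordName        = '@'    # the apex/root record

# 1. Get the Internet connection's current external IP address
#    (assumes the page returns the IP as plain text)
$CurrentIP = (Invoke-RestMethod -Uri 'https://showextip.azurewebsites.net/').Trim()

# 2. Get the A record currently set in the Azure DNS zone
$RecordSet  = Get-AzureRmDnsRecordSet -ZoneName $ZoneName -ResourceGroupName $ResourceGroupName `
              -Name $RecordName -RecordType A
$ExistingIP = $RecordSet.Records[0].Ipv4Address

# 3. If it doesn't match, update the record and enforce a 60 second TTL
if ($CurrentIP -ne $ExistingIP) {
    $RecordSet.Records[0].Ipv4Address = $CurrentIP
    $RecordSet.Ttl = 60
    Set-AzureRmDnsRecordSet -RecordSet $RecordSet
}
```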

Initially it starts out by obtaining the Internet connection’s IP address from the router; you could just as easily do this by obtaining the IP from something like my own tool, https://showextip.azurewebsites.net/ – as it is, asking my own DD-WRT router is less expensive.

I then compare the current IP address with the value currently set on the record I’m interested in – in this case lewisroberts.com – just the apex/root record, not www.

If it isn’t the same, I bounce on to Azure DNS and forcefully update the record and enforce a 60 second TTL.

Using this method I can avoid paying for DynDNS services at all and rather than using CNAMEs for my domain names, I can use actual IP addresses.

– Lewis

Hive Active Heating PowerShell Control with PoSHive

Last week I announced PoSHue, my PowerShell 5 class for controlling and scripting Philips Hue lights – this week sees another announcement along the same lines.

I recently bought a Hive Active Heating system to remotely control my home’s heating and thought it would be pretty cool to be able to access that same level of control (and automation) using PowerShell.


PoSHive is the result. It’s another GitHub project meaning anyone can get a copy, fork it, branch and contribute.

UPDATE: It’s now also available from the PowerShell Gallery.


Install by simply using:

Install-Module -Name PoSHive

When you want to ensure you have the latest version:
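Update-Module -Name PoSHive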

Why would I want access to Hive using PowerShell? The purpose of this class is to enable you to use PowerShell (v5) scripting to exert more powerful logic control over the state of your heating system. In its basic form, it allows you to set the heating mode of the system and the temperature, including the Boost option, Holiday mode and even to advance the system to the next event. PoSHive offers most (if not all) the features exposed by the app and website but using only PowerShell.

I realised a different benefit of PoSHive as I was thinking about use cases: it opens the Hive ecosystem (heating only) to those with disabilities that make it hard, or even impossible, to use an app on a phone or the website, whether due to impaired sight or the fine motor control needed to use a mouse or point with a finger. The class offers those users the chance to simply type commands and have the heating system respond. No fiddly apps or websites to contend with.

Here’s a basic example showing how to get the current temperature recorded by the thermostat. The first 4 lines of this script can even be included in your PowerShell profile so all you need to type is $Hive.Login() and you’re logged on.
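In rough terms it looks like this – the credentials are placeholders, and you should check the readme on GitHub for the exact class and method names if they’ve changed:

```powershell
using module PoSHive
$Username = 'user@example.com'   # placeholder Hive username
$Password = 'MyHivePassword'     # placeholder Hive password
$Hive = [Hive]::new($Username, $Password)

# With the four lines above in your profile, this is all you need to type:
$Hive.Login()
$Hive.GetTemperature($true)      # current thermostat temperature (rounded)
$Hive.Logout()
```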


 

Please check out the project on GitHub and contribute if you can or just let me know how you use PoSHive in your script through the comments section below.

The project is not sanctioned by or affiliated with British Gas in any way and is based on API data formats and responses I’ve observed for my own Hive Active Heating system. The class is designed to work only with the Heating only Hive system (I don’t have the hot water system unfortunately) and is likely also not to work with Multi-zone Hive systems.

-Lewis