
Office Life: Post 2 – The Evolution of Security and Patching in NHS Trusts

Reading Time: 10 minutes

From the very late 1990s up until the time I retired, I worked in an office. I chose a different career path, one that involved sitting in front of a computer doing things with other computers and servers for the British healthcare system: the NHS.


Post 1 covered how the NHS Trusts were financed.

This post covers security from an IT department standpoint and all the things that entailed, such as patching of computers and switches, and edge security (firewalls).


Windows updates

Starting with something I would expect most people who own or use a computer to be familiar with: updates on their Windows computer. Every month or so, that little pop-up message invades your screen asking you to restart to apply updates (which you probably ignore). Then there’s the message when you do eventually shut down, asking you not to switch off your computer as updates are being applied. Yes, all that malarkey.

Bitten. And bitten hard!

When I started my career in IT (in 1999), I knew very little (actually nothing) about Windows updates. All the servers at work were Windows NT4 Standard and the PCs were all NT4 Workstation Standard. They (at the time) did not use Windows updates. My home PC, on the other hand, was Windows 98 (upgraded from Windows 95), and there had been some talk around this Millennium Bug thingy in 1999, but (as far as I knew) that only applied to Windows 98 anyway. And so I forgot all about it.

I started my IT career in February 1999. I nearly ended it in May 2000.

It was Friday 5th May 2000 when I very nearly said “no” and walked away from IT. That was the day I experienced first-hand my very first computer virus: a worm called “ILOVEYOU”, delivered as a .vbs attachment embedded in an email message.

This was my first job ever in IT. I was working at the local Health Authority, looking after no more than five or six servers (an email server was one of them), around 100 users and about 80 desktop PCs. Most email then was sent using a protocol called X.400, rather than the now ubiquitous SMTP. In hindsight, that probably saved us from too much “damage”, as X.400 email addresses were long and complicated, so users’ address books consisted of no more than four or five contacts.

Friday morning, May 5th 2000. At about 10am we got a call on our support desk: one of our users had received a strange email and wanted one of us to pop up and have a look at it. A support technician was duly dispatched. He came running into the office about ten minutes later, his face white and ashen. He’d opened the email on the user’s PC – which turned out to be the ILOVEYOU worm – and it immediately started to propagate. He had the presence of mind to disconnect the PC as soon as he realised (which took a very short amount of time), but it was still too late. Mailboxes had been infected, mail was spluffing out all over the place. The mail server was quickly disconnected, and we were mail-less for a while.

The poor support tech and I both learnt the hard way what a virus was that day. I spent the entire weekend rebuilding the email server from backup.

I learnt several lessons that day. Lessons that I took forward for my entire career and into retirement. I consider myself lucky to have experienced that at a time when Windows updates weren’t readily available and the impact for us was, relatively speaking, negligible. A few people lost a few emails and that was it. The worm was supposed to delete files such as .jpgs on user PCs, but none of our files were held locally, so the worm didn’t touch them.

From that day on, however, I made sure that anything I was responsible for that ran Windows – servers, desktop PCs, phone systems – and the Microsoft software that sat on them (Exchange, Office etc.) was kept up to date with Windows updates. Microsoft later provided Windows Server Update Services (WSUS), so we could manage all that sort of thing from a central on-site repository. I also made sure that everything was running anti-virus software!

And then, eight years after that first virus incident, I found myself working in an IT department in a hospital. This hospital had workstations and servers that were very old. It also had workstations and servers that we (the IT department) weren’t allowed to touch!

Third party systems and warranted environments

Hospital systems are many and varied. There are systems for X-Rays, for testing blood, for ophthalmology, for cancer treatment. There are also systems for patient administration (appointment booking, test results etc.). Most of these systems were third party, i.e. they were procured from an external company and you paid that company to maintain them.

An example of this was the X-Ray system in the hospital, which was procured from Philips (yes, the lightbulb company). The servers, the storage, the desktop workstations, the monitors (to display patient X-Rays) and the X-Ray equipment were all supplied by Philips as part of the system procurement. The Hospital paid Philips a certain amount of money per year to maintain all this equipment and to fix it (24×7) should any of it become “unusable” (in other words: it broke!). Both of the main hospitals in the Acute Trust (10 miles apart, geographically) used the same system, linked together over a network link. Support from Philips (system upgrades or fixes) could range from a visit by the support engineers to remote fixes over an external link. The servers ran Windows, the desktops ran Windows.

This was the warranted system. We in IT weren’t allowed to touch it at all. The Hospital paid a lot of money to Philips to maintain the system, and Philips wanted it to be exactly as they installed it (or as maintained by their engineers) – with no external influence (i.e. from the IT department) whatsoever.

This was lovely, until (by accident) we (as in the IT department) discovered it didn’t run any anti-virus software, there were no operating system patches installed, and the username and password were the same for every single user that used it. Hmm.


Network security

Fortunately, the chap that was in charge of the network engineers was a reasonable guy and would be open to discussing issues or things that we’d found in the course of our travails.

Unfortunately (but probably quite rightly), if we did find an issue and we wanted a firewall installed (or similar), we would have to make a case for it and prove it. Usually, this wasn’t difficult.

For example (returning to the third party suppliers and warranted environments for a minute), we discovered that the companies supporting those warranted environments remotely used an un-firewalled connection directly into the hospital. This was either ISDN dial-up or a bespoke leased private circuit. That was pointed out as being potentially “problematic” (as in a security issue), so discussions were had and firewalls were installed. Third party connections were then restricted to just the access they needed (e.g. HTTP or RDP).
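
If it helps to picture what “restricted to just the access they needed” meant in practice, here’s a minimal sketch of the allowlist idea, written in Python purely for illustration. The addresses, server names and ports are made up, and the real rules naturally lived on the firewalls themselves, not in code:

    # A minimal sketch of the default-deny allowlist behind those firewall rules.
    # All addresses and ports below are illustrative only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        source: str   # the supplier's remote endpoint
        dest: str     # the internal server they support
        port: int     # only the service they actually need

    ALLOWED = [
        Rule("203.0.113.10", "10.20.0.5", 443),    # supplier A: HTTPS to their application server
        Rule("203.0.113.10", "10.20.0.5", 3389),   # supplier A: RDP for remote support sessions
    ]

    def permitted(source: str, dest: str, port: int) -> bool:
        """Default deny: anything not explicitly listed gets dropped."""
        return Rule(source, dest, port) in ALLOWED

    print(permitted("203.0.113.10", "10.20.0.5", 3389))  # True  - RDP to the supported server
    print(permitted("203.0.113.10", "10.20.0.99", 22))   # False - not on the list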

Outgoing internet access had been available for a while, and it was filtered. Unfortunately, the filter was a proxy server running on an old Windows server. The filter lists were never updated; when someone happened upon a malicious or forbidden website, the URL had to be blocked manually. We managed to get that “upgraded” to some Sophos appliances for outgoing filtering. “Upgraded” in quotes, because they were problematic, needed constant attention and were pretty much useless about 50% of the time (see [Office Life: Application Support]). In due course, other (more reliable) solutions were sought and implemented. Eventually. We had to go through many iterations before we got something usable.

External hospital properties (i.e. physical premises that were not on the hospital campus) had ISDN or cheap VPN connections back to the hospital (via “TUT” boxes, manufactured by TUT Systems Inc – now long gone) and, of course, they weren’t firewalled (or encrypted!). These were gradually incorporated into a wider hospital network by replacing those TUT modems with private leased lines.

I mentioned beforehand that the chap in charge of networking was a reasonable guy. That he was, and he was also a sensible one. Where there was a firewall requirement (on the internet link, for example), he would have a pair of firewalls installed for redundancy. Where there was a leased line to a major offsite property (such as a community hospital), he would install a lesser, but separately routed, line for redundancy. Internal switches that connected the server racks and servers together were always paired up – again for redundancy.

We did discover, however, that you could access any switch or firewall (!) with the same username and password. At the time, there was no internal certificate authority available (they’d never heard of such a thing!). So usernames and passwords were changed manually to different ones per device, graduating a bit later on to RADIUS with individual access accounts.

Network-wise (in terms of switches and firewalls), they ended up pretty much OK quite quickly. Once we’d pointed out where the vulnerabilities were, the network engineers were happy to implement fixes. This led to a series of firmware updates, switch patches and regular reviews of the engineer userbase, disabling and removing old accounts and so on.


User device security

By far the hardest thing to implement, ever, is user device security.

Users in this particular hospital environment were either clinical (doctors, nurses, surgeons etc.) or non-clinical (admin staff, booking office, managers etc.). They had a job to do, which wasn’t in IT. Most of them had no inkling of what security was and, especially, why we needed it. Mainly because a) this wasn’t their job – they were doctors, nurses, secretaries, retirees who came back on a part-time basis to help out in a department’s booking office – and b) nobody had bothered to tell them.

You remember I mentioned Windows updates and patches earlier? Whilst it was very easy to implement server-side patches (after testing, of course), implementing client-side desktop patching wasn’t anywhere near as easy.

Patch testing

Patch testing: let me explain what that was all about. When Microsoft issued a patch – usually once a month on something called “Patch Tuesday” (the second Tuesday of every month) – our Windows update server (in its various iterations) would download it. In the early days of Windows updates, the server would download the patches centrally and then deploy them to servers immediately. Each server would duly pick the patch up, install it and reboot.

The reboot schedules were set by group policy to 2am overnight for most servers, and all was good. Until one day it wasn’t. A patch (I can’t remember which one was the first, but I think it was an Exchange one) rendered an email server non-functional.

Many head-scratching hours later, we discovered that Microsoft were lying when they said it was “safe”. That scenario (for different servers, and for desktops as well!) was to be repeated a few times over the course of my career.

This incident made us begin patch testing. Download the patch, install it on a test server or a set of non-critical servers and desktops, and see if it worked. If it did, go for partial deployment a couple of days later, then full deployment a few days after that. We did that for nineteen years and it is very time-consuming! It saved us on many an occasion, however. Not long after that very first patch went a bit haywire, a few internet-based tech sites picked up on the problem and started to report their patch test findings on a monthly basis. That was very helpful indeed, but also: time-consuming.
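
For what it’s worth, the monthly rhythm looked something like the sketch below – a rough illustration in Python, where the ring names and day offsets are examples rather than our actual schedule:

    # A rough sketch of the staged rollout we followed each month.
    # Ring names and day offsets are illustrative, not the real schedule.
    from datetime import date, timedelta

    RINGS = [
        ("test / non-critical servers and desktops", 0),  # install as soon as the patch lands
        ("partial production deployment", 2),             # a couple of days later, if nothing broke
        ("full deployment", 7),                           # the rest, a few days after that
    ]

    def rollout_plan(patch_tuesday: date) -> list[tuple[str, date]]:
        """Return the date each ring receives the month's patches."""
        return [(ring, patch_tuesday + timedelta(days=offset)) for ring, offset in RINGS]

    # Example: February 2013's Patch Tuesday fell on the 12th.
    for ring, when in rollout_plan(date(2013, 2, 12)):
        print(f"{when}: approve patches for {ring}")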

Of course, patch testing only ever covered a sample of the myriad server configurations we had. What worked for one didn’t necessarily work for another.

Client-side patching

Updates for desktops were again controlled by group policy and, of course, they were tested prior to deployment (Office patches were notorious for going awry). You could set the install times, whether the desktop would automatically reboot (or not), and whether to allow the user to defer if necessary.

When do you patch a client’s desktop PC?

  • When it’s on.
  • When it’s not in use.
  • At a specific time during the day (after consultation with the department head and issuing communication to the users).

When I worked in the “other” Trusts, this was quite easy to do. IT dictated (within reason) when the updates would be deployed; we would set desktops to reboot automatically if necessary and offer the user no option to defer. That meant most of the desktops got updates at the requisite time and few were missed. The missed ones were queried and rectified.

Not quite so straightforward in a hospital, however. Clinical areas sometimes needed their desktops to stay on for the majority of the day, as they would be in use in clinics for reading patient details and results, or (on the ones we were allowed to patch) running software that had to stay open.

The non-clinical areas were generally OK (most admin staff had lunch at 12, for example), so you could do those over lunch, or overnight.
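
To give a flavour of the sort of decision being made for each area, here’s a small sketch in Python of the “can we patch this desktop now?” logic. The department names and time windows are made up for illustration – in reality this was all expressed through group policy, not code:

    # A hedged sketch of the per-area patch-window decision.
    # Department names and times are examples only.
    from datetime import time

    # Each area gets a window when a reboot is least disruptive.
    PATCH_WINDOWS = {
        "booking office": (time(12, 0), time(13, 0)),  # most admin staff at lunch
        "finance":        (time(2, 0),  time(4, 0)),   # overnight, machines left on
        "outpatients":    (time(20, 0), time(22, 0)),  # after clinics have finished
    }

    def may_patch(department: str, now: time) -> bool:
        """True if 'now' falls inside the department's agreed window."""
        start, end = PATCH_WINDOWS[department]
        return start <= now <= end

    print(may_patch("booking office", time(12, 30)))  # True  - lunchtime
    print(may_patch("outpatients", time(12, 30)))     # False - clinics still running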

However, the hospital Trust board decreed that we had to give users the choice to defer updates and to defer reboots. A bad choice, as it turned out, as many desktops remained unpatched. Some went unpatched for years, until the policy was changed.

When does a policy change in the NHS? When something happens.

You could argue until you were blue in the face. Spell things out very, very clearly, or demonstrate the outcome of inaction. But you were still not allowed to change the update policy. The only time it changed was when something happened. Inevitably, one day, something did. It wasn’t major for us, but somebody was relieved of a few thousand pounds (which they did get back, eventually). Fortunately (not for them, obviously) it was someone important that got scammed. It caused disruption for that important person and was enough to force a policy change.

The change meant we could impose mandatory reboots in certain areas. However, because of the lack of communication to the users, we couldn’t roll it out very quickly. Sigh.

Clinical areas were a bit more frustrating. Due to the third party support situation, we could only patch certain desktops. And then we had to ask users to leave their desktops on overnight (outside normal clinical working hours) so they could be patched and rebooted. We waved goodbye to our green policy and decided to patch instead.

The third party supported equipment

That still left the third party equipment: servers and desktops. We embarked on a long set of discussions with the third party manufacturers, citing all sorts of readily available examples of where equipment had fallen foul of scammers, hackers or other black hat actors, due to lack of patching, anti-virus software and security.

Going forward, we made it clear in the preliminary talks with third party suppliers, when we were asked to scope new equipment, that patching and anti-virus software would be a requirement, not an option. The response from the suppliers varied widely. Some (like Philips, mentioned earlier) were happy to patch their systems (for an extra fee, of course), and would do so after testing. The bigger suppliers had test platforms themselves and would approve patches for their servers and desktop workstations accordingly. This process was lengthy, however, and could take several weeks before a server was patched. Consequently, the patch levels – and therefore the security of the systems – were always behind.

Some suppliers were downright idiots. I recall a set of pathology machines that were connected to the network; however, they were not allowed any patches or any anti-virus software (of any kind), and everyone used the same username and password. Any disruption to those machines could cause critical delays to bloodwork results or other chemical pathology work, so we were told in no uncertain terms to leave them alone.

It took a good deal of time to change their minds, again using real-life scenarios and cases from various sources. In the meantime, we sectioned off their little area of the network and placed all of the machines behind a firewall. Network traffic in and out was controlled and filtered, down to the last packet. We insisted that individual usernames and passwords were issued to the users of the systems, so that at least there was an audit trail should something go awry. It took a long time.
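
The individual accounts sound like a small thing, but they were the difference between knowing that something had happened and knowing who, where and when. A trivial sketch in Python of the audit trail idea, with invented usernames and device names:

    # An illustrative sketch of why individual accounts mattered: an audit trail.
    # Usernames and device names are invented.
    from datetime import datetime

    audit_log: list[dict] = []

    def record_login(username: str, device: str) -> None:
        """With one shared login you only know that someone connected;
        with individual accounts you know who, where and when."""
        audit_log.append({"who": username, "device": device, "when": datetime.now().isoformat()})

    record_login("j.smith", "path-analyser-01")
    record_login("a.jones", "path-analyser-03")

    for entry in audit_log:
        print(f"{entry['when']}  {entry['who']} -> {entry['device']}")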


Summary

  • Windows updates quickly became a thing, as I got bitten by a bug!
  • Third party companies can be difficult and not security oriented.
  • Network security is very important. Luckily, that fact was recognised.
  • Test, test, test!
  • Server-side patches are relatively easy to deploy. Client-side, not so much!
  • Third-party company patching. Eventually.

What we’ve covered so far…

(Post 1) How an IT Department is Financed in the NHS

(Post 2) Windows updates
(Post 2) Network security
(Post 2) User device security