Keeping up my Halloween tradition of talking about things that went bump in the night, here are a few more oops moments that I learned from. Maybe they will help you avoid similar mistakes.
In crisis or incident management there is a lot that can go wrong. One outfit that I worked for had a crisis management manual that was spilling over into a third 4-inch-thick ring binder. Yes, it was well researched and worked well for desktop exercises, but how are you going to work with that when you are stuck out in the car park, in the wet and the wind, trying to sort out which page you need?
One of the big problems with thinking about what disaster might befall you is that you go down the input specification route: you plan for all sorts of things that might happen, when many of them have the same two or three outcomes, namely that you lose the use of all or part of the site, or all or part of its services.
My contention is that it doesn’t matter that much why you have the problem. The cause just gives you a clue as to how long you will have the problem for. For example, if there is a gas leak outside the site and you can’t get in (or get evacuated), you lose the use of the building for a few hours; but if you have a fire, it could mean anything from a few days’ disruption to, possibly, having to move to new premises. In both cases it is the loss of use that needs priority.
All of the functional groups within the building will have their own continuity plans, and the FM team needs to be aware of these and support them as necessary, but it is the FM team that will take most of the early actions in managing the incident.
In these pages you’ll find stories of some of the major incidents that I’ve been involved with. In The Day The Town Stood Still it was a pretty routine day when something came up, and that then escalated to a point where the improbable coincidence of a second problem brought us close to the edge of a disaster. If the team at the second site had not been effective in dealing with the flash fire, the gridlock caused by the first problem, which prevented the Fire Brigade from getting through, might have seen us lose a building. There is a very fine line between OK and Oh S**t! sometimes.
Does fortune play a part? Maybe it does; there are times when timing or nature will be on your side, but mostly it is thinking, training and practice that will make the difference. If you have thought things through, planned and prepared by getting people trained, and have drilled them, then most of the risks are mitigated or at least reduced.
But to finish off this series with a final foul up, I’ll tell you about the one that really got me into FM. At the time I headed up the operational side of a logistics business, while the property maintenance team worked for HR. We had a problem with the flashing that covered the join between the wall and the roof of the warehouse above the goods-inwards doors, and a decent repair was budgeted for.
I arrived one morning to find a queue of lorries outside. The cause was obvious: scaffolding completely blocked access to goods in, and our operations were paralysed. It cost us dearly, but was easy to put right. The cause was poor communication; no-one had bothered to consider that we needed to keep working through the repairs. Facilities came under my control from then on so that there would be no more such incidents, and that led to me making the move to FM myself.
Continuing in the run-up to Halloween with tales of things that went wrong, this week we turn to a bit of a farce that we enjoyed along with our friends in Information Technology.
One site I inherited when I moved from Logistics to Facilities Management was a multi-storey office block that was almost wholly occupied by IT people and was one of two main centres for that trade. The building was also one of the main hubs for the company’s data network and, as such, was something of a sacrosanct site.
The FM work there had been done by part of the IT team, and we had inherited those people along with the site. They knew their job and they knew their building but, until we arrived, they had never had a ring-fenced budget, and every year something had been lopped off to fund IT project overspends.
As we dug deeper into the backlog of maintenance, one thing that I had placed on the high-priority list was the emergency backup generator system. This was a thing of legend at the site and beyond; “They have a backup generator for their backup generator,” people around the company would tell you in tones of some awe. The generator room in the basement had taken on the qualities of a shrine, and the full-time engineer they had taken on to maintain the system played the role of high priest to the hilt.
Access to the room was something of a privilege, but my regional maintenance manager and I were reluctantly granted entrance on the basis that we were now in charge. The room was pretty spotless, and the two engines, one a Gardner and the other a Rolls-Royce (no less), gleamed on their plinths.
The system was explained patiently to us. In the event of a power failure there was a battery backup that would provide a few minutes of power while the Gardner engine kicked in. If, for any reason, that failed to fire up, the Rolls-Royce would take over and, in the event of a long-term power outage, the engines could be run alternately to keep the data flowing.
But it had never been tested. Yes, there was a switch that allowed a simulated power cut to see if these beauties would kick in, and that was tried annually, but the overall system had never been tested end to end. So I announced that we would test it, and requested a date when it would be convenient for us to do so.
The entire IT hierarchy were appalled and the ranks massed to oppose this folly, but in the end we got our way. We put in a bypass power source from the main switch so that the building would not actually lose power, and threw the switch on the original circuit to make the generator room think that the mains had gone off.
The battery backup didn’t work. It didn’t even have enough power to start the generator, let alone support the building. But we had also found, when we installed the bypass, that two thirds of the building, including a pair of new computer rooms, were already bypassing the backup system because corners had been cut in funding projects.
We found the money to put things right, but the backup myth died. These things have their importance in their own time, but times move on. We put a lot into that building to prepare it for the 21st century, but it has gone now, replaced by an apartment complex. Happy memories, though!
We tend to talk about the things that we’ve done well, but we learn more from the things that go wrong. So with Halloween approaching, and in the spirit of things that go bump in the night, maybe it’s a good time to look at a project that went wrong. And so here’s a skeleton from my closet.
The project was to replace the water storage facility for a substantial sprinkler system. Repairing the existing storage would have been a difficult job and would have taken the system out of action for at least eight weeks, which was not acceptable to the client or their insurers. There was also a desire to expand the system, which would have required additional capacity. On that basis we elected to go for new storage, which gave us the option of repairing the original at our leisure should it be needed in the future.
In working through the options open to us, the most economical way forward was to install a pair of cylindrical tanks about 50 metres from the original installation, on an available piece of ground that would require little preparation to accept them. An appropriate engineering contractor was engaged to design the system and provide us with a specification that we could put out to tender, and it was during this exercise that we made a mistake in communication, although no-one realised until much later. We had our own mechanical and electrical team and had given them the lead in working with the design engineer. When the subject of connecting an appropriate power supply for the pumps came up, our man said that we would do that, and this was true; we would do the connection at the panel. We meant the panel in the nearest building; he meant the one in the new pump house.
Specification done, we went out to tender. There were not too many companies capable of a job of that size, so we short-listed three for the final stage and had them all in on the same day for the site inspection and a question-and-answer session. At some point the power supply question came up, and the answer given by the design engineer was “Client is arranging connection”. No-one on my team queried that, because we had no reason to.
At the time our biggest issue was getting planning permission for an installation that would be partially visible to residential neighbours, many of whom were openly hostile to the site. We were into the games that one plays in these circumstances, and were happy to get through that stage with the decision that we wanted.
A contract was placed for just over £100k. It was not a hugely disruptive project, because of the site that we had chosen, and work proceeded quickly. At about two thirds of the way through, I took a walk around with the contractor. Both tanks were substantially complete and the pump house was up and being fitted out. Laying the power cable from the pump house to the nearest building would involve digging up the road, causing possible disruption to my occupiers, so I asked when that was scheduled for.
“But you’re doing the connection,” he said, and the misunderstanding back at the start of the design stage began to emerge. Our spec did not allow for cutting and filling a trench to bridge the 50-metre gap, and it cost us £10k to do it. All because of an ambiguity in the spec: always read the small print, especially if you wrote it yourself.