Network, June 2017

…the fault, then repair teams would come out to fix it, so it took hours and hours for some faults to be repaired. In the Midlands in April 2011 the target was 60%. We moved that figure by 20 percentage points in under 12 months by focusing on staff and how they could deliver in the field. Those guys in the field know what to do and were only held back by bureaucracy. There were layers of management above them, which made it difficult. So now it's very simple: we send three people to each fault. The whole idea was to push that absolutely clear message, and if managers wanted to send more people they could. The change was very rapid. In two to three months it cleared the way to allow people to deliver what they can. The key thing is that with the same people, the same field guys and no new resources brought in, we could make the changes we needed to make. If we outperform the reliability benchmark, there are financial incentives.

Has innovation, maintenance or automation been the bigger driver for increasing reliability?

Innovation is in everything we do. It is not purely about technological solutions. Our initial idea of Target 60 was innovation: review every HV circuit to determine what switching you would do in the event of a fault, ahead of it happening. Combine this with a simple and clear message to staff: send a minimum of three HV switchers and prioritise customer restoration. In the Midlands, we applied the predetermined switching points and empowered staff to deliver. The result was a 10% improvement in three months and a 20% improvement within 12 months.

During this period, Midlands staff also identified switchgear with 'restricted use' due to maintenance backlogs; these restrictions increase the number of restoration stages required and delay restoration. We empowered the new geographic-based teams (the same field staff) to fix the problems. They created a programme of remedial maintenance, they measured delivery, and they cleared those longstanding issues within the first 12 months, so the kit could be used as originally intended.

More innovation has come with the application of additional protection zones, combined with computer schemes that apply simple logic to known data to apply safe restoration strategies. So to answer your question, it's a combination of innovation (thinking, not just technology) and good maintenance programmes, but also staff knowing what needs to be done and wanting to do the right things. We now restore over 90% of WPD customers on HV faults in less than an hour.
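To make the predetermined-switching idea above concrete, here is a minimal sketch in Python of a per-circuit plan lookup. It is an illustration under assumed names and data (the circuit IDs, switch labels and customer counts are invented), not WPD's actual control system.

```python
# Minimal sketch of "predetermined switching": for every HV circuit,
# the actions to isolate a fault and back-feed customers are agreed
# ahead of time, so staff execute a plan instead of designing one
# mid-incident. All identifiers and figures are hypothetical.

PREDETERMINED_PLANS = {
    # circuit id -> ordered switching actions for a fault on that circuit
    "HV-0147": [
        "open  CB-0147-A   (isolate the faulted section)",
        "close NOP-0147-B  (back-feed 1,200 customers from the adjacent circuit)",
        "close NOP-0147-C  (back-feed 430 customers from the adjacent circuit)",
    ],
}

def switching_plan(circuit_id: str) -> list[str]:
    """Return the pre-agreed switching actions for a faulted circuit."""
    plan = PREDETERMINED_PLANS.get(circuit_id)
    if plan is None:
        raise KeyError(f"no predetermined plan for {circuit_id}; review needed")
    return plan

for step in switching_plan("HV-0147"):
    print(step)
```

The design point is that the engineering review happens before the fault; during the incident the plan is simply executed, which is what lets a small crew restore customers quickly.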
How has WPD used technology to increase the reliability figures?

Part of what we did was to put in more switches or protection zones, taking pieces of the network and cutting them up into smaller slices. What we have now is telecoms on the switches, so we can check them from our two control rooms. We have added devices based on fault performance and the number of customers on a circuit. We also had various sensors in the network to get information, so we wrote algorithms to do certain things. It's not AI, but simple logic. It's innovation that we applied some years ago, but now apply in a slightly more sophisticated way depending on the fault. The algorithm will put the highest number of customers back on as quickly as possible and assess what has to be done to restore the power in a safe way. We are utilising a simple technology to maximum effect.

In the Midlands a lot of outstanding maintenance prevented full use of installed equipment, so we started a technical programme to find and fix issues based on a hierarchical system and defect type.

How has HV system automation changed how fast WPD restores power?

Remote control devices have been around for some time, but smarter use of this equipment with additional network sensing enables supplies to be rerouted quickly and safely without having to send a person out. Restoration is often initiated via computer algorithms.
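The "simple logic, not AI" described here reads like a greedy priority rule: after the fault is isolated, restore the largest restorable blocks of customers first, subject to a safety check. Below is a minimal sketch under that assumption; the data model and field names are invented for illustration, not WPD's actual scheme.

```python
# Sketch of a greedy restoration rule: bring back the most customers
# first, considering only sections that telemetry indicates can be
# safely back-fed. Hypothetical model, not WPD's actual scheme.

from dataclasses import dataclass

@dataclass
class Section:
    name: str
    customers: int
    alt_feed_available: bool  # can a normally-open switch back-feed it?
    safe_to_switch: bool      # sensor data shows no hazard on this section

def restoration_order(dead_sections: list[Section]) -> list[Section]:
    """Order restorable sections so the most customers come back first."""
    restorable = [s for s in dead_sections
                  if s.alt_feed_available and s.safe_to_switch]
    return sorted(restorable, key=lambda s: s.customers, reverse=True)

sections = [
    Section("A", customers=1200, alt_feed_available=True, safe_to_switch=True),
    Section("B", customers=430, alt_feed_available=True, safe_to_switch=True),
    Section("C", customers=85, alt_feed_available=False, safe_to_switch=True),
]
for s in restoration_order(sections):
    print(f"back-feed section {s.name}: {s.customers} customers restored")
```

In this toy example, section C has no alternative feed and stays off until the repair itself is complete, which is where the generator question below comes in.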
Could you explain a little more about how WPD uses generators to restore power, and stepping up power from LV to HV?

Our clear objective is to restore supplies quickly, but you often have a section of network with customers who cannot be restored until the fault is actually fixed. For these customers the consideration is then whether generators should be used to restore supply and how quickly this can be done, versus the likely time to complete that repair. In the Midlands they were using external service providers exclusively and had no generation of their own. Service standards were poor, and the decision-making rules for staff to use generators were so restrictive that they were never used for most faults. They had almost 20,000 customers a year who remained off supply for more than 12 hours on faults. We ordered £3m worth of generator sets in our second month of ownership and empowered teams to use them. The WPD model is to own our own generation capability and use external service providers to support this as necessary. We reduced the number of customers off supply for over 12 hours to under 10,500 within the first year. It is now under 70 a year.
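The generator trade-off described above is essentially a comparison of two time estimates: how quickly a mobile generator could restore the remaining customers versus how long the repair is likely to take. Here is a hedged sketch of such a decision rule; the margin and parameter names are assumptions for illustration, not WPD's actual criteria.

```python
# Sketch of a generator deployment decision: connect a mobile generator
# when it would restore the remaining customers materially sooner than
# the repair will. The one-hour margin is an illustrative assumption.

def deploy_generator(est_repair_hours: float,
                     est_generator_hours: float,
                     min_saving_hours: float = 1.0) -> bool:
    """True if a generator restores supply usefully sooner than the repair."""
    return est_generator_hours + min_saving_hours < est_repair_hours

# A cable fault needing a long excavation: generator on site and connected
# in 2 hours versus an estimated 9-hour repair -> deploy the generator.
print(deploy_generator(est_repair_hours=9.0, est_generator_hours=2.0))  # True
```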
What are the main causes of faults on WPD's network?

Reasons can vary. A cable can perform well for 60 years, then prolonged extremes of weather, such as cold and drought or particularly long spells of rain, can cause ground movement. As soon as that happens, any latent weakness or previous damage in the cable can be exposed, and a fault can occur.

What is the next big challenge for WPD?

The electricity industry used to be fairly static, but the future is really exciting. I think there's going to be a transformational change over the next 10 years. I'm sure the requirements of decarbonisation, particularly for energy, transport and heat, and how they evolve over time, will make this both a challenging and a really exciting period. We expect DNOs to evolve into DSOs; it's a message we've embraced. A lot of innovation projects have that in mind, particularly with distributed generation and other distributed energy resources such as storage and frequency response. So we are putting in place a lot of enabling steps that will allow us to start acting as a DSO in the short term. We are partly there now, and over the next year or two we will develop as a DSO, evolving further as we go through RIIO-ED1.

"Engineering is important, but our business is about serving customers. Target 60 let us change the mindset in the business to 'restore first, fix second'."