
Cambrian Line Radio Signalling failure - RAIB investigating


Recommended Posts

  • RMweb Gold
2 hours ago, Zomboid said:

Would it actually be possible to run trains at 100 mph+ at 2-3 minute headways using mechanical signalling and absolute block?

 

Would be economically impossible if not technically.

Provided 3 aspect signals were used it would be possible (100mph+ requires 3 aspect signals), although TCB would be more likely than Absolute Block I'm sure.  As a sort of aside to this, I regularly spend time waiting to join trains at a local junction station on the GWML where it is quite possible to work out (even if you hadn't seen it in operation - although I had in my youth) how the previous Absolute Block sections would have worked in relation to today's signalling, and it is fascinating to see the computer at TVSC clear a particular signal with an approaching train in exactly the same position it would have been in when the equivalent semaphore was cleared under Absolute Block working.

 

The current signalling is largely equivalent in signal spacing to that provided in 1961 and at that time it effectively created one extra block section over a distance of 5 miles although of course each signal section is now a block section in its own right so their number has actually increased considerably.  The headway the signalling provides is basically no different from what was specified for the first, 1961, installation of colour light signalling  and old timetables indicate that the Absolute Block sections could handle the same headway.  The significant difference is that train speeds are much higher than was specified for the 1961 resignalling.   Between Tilehurst (west of Reading) and Moreton Cutting (east of Didcot) the position of the 3 aspect signals fairly closely replicates the block sections and intermediate block sections they replaced in the 1960s although each signal is obviously now a distant signal for the one in advance so capacity is spread more evenly.   I saw the headway graphs when the signalling for that section was being re-specified in the early 1990s and it was decided there was no need to change any signal positions although some have moved a bit as a consequence of electrification works and the need to review clearances etc.

 

BUT, and it is a very big but - it would be a very intensive workload for mechanical/local signal boxes with today's relatively continuous passage of trains at speed; they'd be sitting on the block all the time, although the system would inevitably have to be TCB.
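
For anyone wanting to put rough numbers on the 2-3 minute question, the sketch below does the standard plain-line headway sum for 3 aspect signalling. The signal spacing, sighting distance, overlap and train length figures are illustrative assumptions only, not Cambrian or GWML values.

```python
# Rough 3-aspect plain-line headway arithmetic (illustrative figures only).

MPH_TO_MS = 0.44704

def three_aspect_headway(speed_mph, signal_spacing_m, sighting_m=200.0,
                         overlap_m=183.0, train_length_m=200.0):
    """Approximate headway in seconds for plain-line 3-aspect signalling.

    For the following driver to see a continuous green, the train ahead must
    have passed the signal two sections ahead, cleared the overlap beyond it,
    and drawn its full length clear.
    """
    speed_ms = speed_mph * MPH_TO_MS
    headway_distance = sighting_m + 2 * signal_spacing_m + overlap_m + train_length_m
    return headway_distance / speed_ms

if __name__ == "__main__":
    secs = three_aspect_headway(speed_mph=100, signal_spacing_m=1800)
    print(f"~{secs:.0f} s ({secs/60:.1f} min) between following trains at line speed")
```

With those assumed figures the sum comes out at roughly 90-95 seconds between trains at 100mph, so a 2-3 minute planning headway is comfortably within what such signalling can offer - the workload point above, rather than the block arithmetic, is the real obstacle.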

Link to post
Share on other sites

Some observations:

 

The Cambrian ETCS implementation was a pilot scheme. I would be very surprised if the installed system is to the latest hardware and software specifications.

 

ETCS gives you ATP as an integral part of the system. While TPWS is good (disclaimer: I have an interest as I led the team that developed the concept and had it agreed by DfT and the Railway Inspectorate - other members of this forum were on the team), TPWS cannot provide the same protection against the low probability but high consequence events that ATP prevents. If you are re-signalling you really want ATP.

 

ETCS is suitable for speeds above 125mph. It has been determined that lineside signals are not. ETCS would therefore allow some line speed increases on WCML/ECML.

 

Until the end of next month at least, fitting of ETCS is a legal requirement when re-signalling (as opposed to maintenance replacement). Even after that there will be a strong commercial case for following the ETCS standards. (Though there would be an opportunity for minor deviations to allow something like the ETCS level 3 lookalike that Bombardier has implemented outside the EU).

 

ETCS allows ATO, which can give more reliable timetable adherence in high-density sections such as the Thameslink core.

 

With ETCS 2/3 you could, in theory, abandon all lineside signals, track circuits and axle counters. In level 3 particularly these serve no purpose except as a fall-back if the ETCS fails. But given that ETCS hardware is mostly duplicated with hot standbys on board and at wayside, system failures are rare. (Bangkok Skytrain uses a specifically designed metro version of ETCS and has no signals except in the depots, with no main line track circuits or axle counters). Removing the lineside systems reduces failures and costs.
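
As a very rough illustration of why the lineside train detection becomes redundant at level 3, here is a minimal sketch (in Python, with invented data structures rather than the real ETCS message set) of a radio block centre extending a following train's authority purely from the leading train's reported position and confirmed integrity.

```python
# Minimal Level 3-style authority sketch: the RBC tracks each train's
# self-reported position and proven integrity, and authorises the following
# train up to the leader's safe rear end less a margin. Purely illustrative -
# the real ETCS messages and state machines are far richer than this.

from dataclasses import dataclass

@dataclass
class PositionReport:
    train_id: str
    front_m: float        # position of the train front along the route (metres)
    length_m: float
    integrity_confirmed: bool

def movement_authority_end(follower: PositionReport,
                           leader: PositionReport,
                           safety_margin_m: float = 50.0) -> float:
    """End of authority for 'follower', derived only from radio position reports."""
    if not leader.integrity_confirmed:
        # Without proven train integrity the leader's rear cannot be trusted,
        # so no extension of authority beyond the follower's current position.
        return follower.front_m
    safe_rear = leader.front_m - leader.length_m
    return max(follower.front_m, safe_rear - safety_margin_m)

leader = PositionReport("1A01", front_m=12_400.0, length_m=200.0, integrity_confirmed=True)
follower = PositionReport("1A03", front_m=9_800.0, length_m=200.0, integrity_confirmed=True)
print(f"Authority for {follower.train_id} ends at {movement_authority_end(follower, leader):.0f} m")
```

Nothing in that calculation needs a track circuit or axle counter; they only come back into play as the fall-back when reports or integrity are lost.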

Edited by david.hill64
  • Like 1
  • Informative/Useful 2
Link to post
Share on other sites

  • RMweb Gold

David: your comment “if you're re-signalling you really want ATP” - if that's the case then why was it only limited to use on the GW and Chiltern main lines before being dropped in favour of TPWS and now, to an extent, ETCS?
 

When I learned ATP (Chiltern) we were told it was dropped on the grounds of cost compared to TPWS, and the system still in use is a cut-back version of what was originally envisioned for the whole country, i.e. the arming wire is only fitted on approach to signals whereas it was originally planned to run continuously in fitted areas, which I should imagine would have been a huge expense in both fitting and maintenance.

 

I found ATP to be restrictive at times due to the short arming loops but it’s nothing compared to how limited ERTMS is when doing an unusual move or a possession train 

 

I suppose I'm in quite an unusual position to have been trained in ERTMS, RETB and ATP train working and can see the good, the bad and the similarities between each system, as to be honest they are all very similar systems with both good and bad points.

Edited by big jim
  • Informative/Useful 3
Link to post
Share on other sites

3 hours ago, big jim said:

David: your comment “if you're re-signalling you really want ATP” - if that's the case then why was it only limited to use on the GW and Chiltern main lines before being dropped in favour of TPWS and now, to an extent, ETCS?

 

Jim, the GW and Chiltern ATP schemes were pilots to try different ATP technologies as an overlay to the standard BR signalling. They were installed following a post-Clapham assurance by BR that ATP would be fitted (even though Clapham was not ATP-preventable). Railtrack redid the cost-benefit analysis and showed that the costs of ATP as an overlay to the existing system were disproportionate to the benefits. Hence the development of TPWS, which was intended as a short-term interim measure pending roll-out of ETCS level 3, which was seen at the time to be about 5 years away (i.e. available by 2000). Well, we all know what actually happened.

 

With communications-based systems such as ETCS level 2 and 3, ATP is essentially free (well, very little cost).  ATP does prevent the low frequency but high consequence events that TPWS cannot prevent, such as a high speed SPAD where TPWS may not stop the train before the conflict point. I have worried for the last 20 years about an ATP-preventable accident on a TPWS section. Though as it happens the benefit of TPWS is now thought to be rather higher than that used in the original analysis.
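
To put an illustrative figure on why a trainstop intervention at the signal is not enough at speed: stopping distance grows with the square of speed, so the energy carried past the signal quickly exceeds any sensible overlap. The deceleration rate and overlap length below are assumptions made for the sake of the arithmetic, not TPWS or braking standards.

```python
# Illustrative stopping-distance arithmetic behind the TPWS-vs-ATP point:
# a brake application triggered *at* the signal cannot undo the kinetic
# energy already carried into the overlap. Figures are assumptions only.

MPH_TO_MS = 0.44704

def stopping_distance_m(speed_mph: float, decel_ms2: float = 1.2) -> float:
    """Distance to stop from speed under constant deceleration (v^2 / 2a)."""
    v = speed_mph * MPH_TO_MS
    return v * v / (2.0 * decel_ms2)

overlap_m = 183.0   # assumed full overlap beyond a stop signal
for mph in (40, 75, 100, 125):
    d = stopping_distance_m(mph)
    verdict = "stops within overlap" if d <= overlap_m else f"overruns by ~{d - overlap_m:.0f} m"
    print(f"{mph:>3} mph: ~{d:>4.0f} m to stop -> {verdict}")
```

At low speed the overrun is contained; at 100mph+ the train is still doing a considerable speed at the conflict point, which is exactly the gap that continuous ATP supervision of the braking curve closes.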

 

If we were not to fit ATP when installing new signalling, in the event of an accident the consequences in court for those having made that decision would be dire, and rightly so.

 

I accept that ETCS does have operational restrictions and I expect that with wider implementation more workarounds will be found.

  • Agree 1
  • Informative/Useful 3
Link to post
Share on other sites

19 hours ago, Grovenor said:

The report is quite clear that the French version revised the software to store TSRs in non-volatile memory which solved the problem, but that change was not carried through into the Cambrian version. The oddity is that the RAIB are not recommending that change to be made. It should be relatively simple as the software already exists to be ported across.

 

I disagree.  Storing the TSRs that way would prevent the specific scenario which occurred in this incident.  It would not solve the underlying problem because I still see scenarios where changes made to the TSR set would not be activated on the system and the previous TSR set would persist unbeknown to the signalling staff.
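
To make that concrete, below is a hedged sketch of the sort of cross-check that would at least detect such a divergence. The data model is hypothetical, not the Cambrian system's: it simply fingerprints the TSR set the signalling staff believe is in force and the set actually being transmitted to trains, and alarms if the two differ.

```python
# Hypothetical cross-check: compare a fingerprint of the TSR set shown to the
# signaller with the set the radio block centre is actually transmitting, and
# raise an alarm on mismatch. Illustrative only - not the real data model.

import hashlib
import json

def tsr_fingerprint(tsr_set: list[dict]) -> str:
    """Stable hash of a TSR list so two copies can be compared cheaply."""
    canonical = json.dumps(sorted(tsr_set, key=lambda t: t["id"]), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

signaller_view = [{"id": "TSR-101", "from_km": 12.4, "to_km": 13.0, "speed_mph": 20}]
rbc_transmitting: list[dict] = []   # e.g. a set silently lost after a restart

if tsr_fingerprint(signaller_view) != tsr_fingerprint(rbc_transmitting):
    print("ALARM: TSR set shown to signaller differs from the set being sent to trains")
```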

Edited by DY444
Link to post
Share on other sites

  • RMweb Premium

"Upper or Lower Quadrant?"

Somersault of course, what else would you expect me to want?

Sorry to have distracted this thread, though it has been an interesting discussion.

More seriously, I worry considerably about the great efforts made in all walks of life to eliminate jobs using technology "to save money". If there are no jobs how will the population earn money and pay taxes?

And often all the technology does is introduce extra complication and the jobs do not disappear.

A non-railway example. Smiths started rolling out customer-operated tills/payment terminals (I don't know the correct word). I remember talking to the shop staff in Brecon a good few months later, and they confirmed that it needed just as many till staff because of the number of shoppers who needed help with the tills. A great way of saving money!

Anyway, rant over. Back on timetable please, with no floods.

Jonathan

  • Like 1
  • Agree 1
Link to post
Share on other sites

Concentration of signalling at fewer locations in order to save on staffing costs is nothing new; IIRC the GWR had one box controlling all of the triangular junction between Bradford-on-Avon and Trowbridge? The development of technology has simply meant that the process has been taken further than ever before, so that two Signallers now control the entire Cambrian network beyond Shrewsbury - how many staff would it take if the route was manually signalled?

 

I do agree however that the technology and systems must be made 100% reliable, with back-ups; I was involved in far too many loss-of-signalling incidents as a Controller! And just in the last few days major disruption was caused in the London Victoria and Glasgow Central areas due to signalling failures.

  • Agree 1
Link to post
Share on other sites

  • RMweb Gold
1 hour ago, DY444 said:

 

I disagree.  Storing the TSRs that way would prevent the specific scenario which occurred in this incident.  It would not solve the underlying problem because I still see scenarios where changes made to the TSR set would not be activated on the system and the previous TSR set would persist unbeknown to the signalling staff.

That is an interesting view and I can see that it could indeed apply.  Yet again we come back to the differences between the way we have traditionally disseminated TROS information in Britain, and indeed the mechanism for introducing it on a route, and the French approach.  It would strike me as more logical in ERTMS terms to follow the latter, and presumably the fully functional version of ERTMS used in France would completely encapsulate SNCF TROS methodology as a logical progression forwards from the Livre Ligne principle by making it more comprehensive but in a different format - hence TROS information would naturally be included in the non-volatile memory.

Link to post
Share on other sites

3 hours ago, DY444 said:

 

I disagree.  Storing the TSRs that way would prevent the specific scenario which occurred in this incident.  It would not solve the underlying problem because I still see scenarios where changes made to the TSR set would not be activated on the system and the previous TSR set would persist unbeknown to the signalling staff.

1 hour ago, The Stationmaster said:

That is an interesting view and I can see that it could indeed apply.  Yet again we come back to the differences between the way we have traditionally disseminated TROS information in Britain, and indeed the mechanism for introducing it on a route, and the French approach.  It would strike me as more logical in ERTMS terms to follow the latter, and presumably the fully functional version of ERTMS used in France would completely encapsulate SNCF TROS methodology as a logical progression forwards from the Livre Ligne principle by making it more comprehensive but in a different format - hence TROS information would naturally be included in the non-volatile memory.

 

I can't help thinking that the RAIB has not correctly reported the extent of differences in the SNCF version of the system.  The use of non-volatile memory gives you two copies of the information, one used to transmit data to trains and the other to present information to signalling staff.  Sometimes there are practical reasons for doing this but in general having two sources of truth is not a sound principle.  If you do go that way then you need to ensure you have foolproof mechanisms for keeping the data consistent or at the very least detecting that they are not, all of which adds complexity and more potential sources of design error.  The SNCF system must surely have such mechanisms or it is potentially no more sound than the Cambrian version.  In my view the architecture of the system on the Cambrian is the right one but they just did a bad job of implementing it.  As I said in my earlier post a design that allows an exception to silently terminate a critical thread is just plain bad in any kind of software.
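
On the 'silently terminated thread' point, a hedged sketch of the failure mode and the usual defence is below. It is generic Python, not the actual RBC software: the bad pattern lets the refresh loop die without trace, while the alternative traps the fault, logs it and leaves a health flag that the rest of the system (and the signaller's display) can alarm on.

```python
# Hypothetical sketch of the failure mode described above: a background
# thread that refreshes safety-related data dies on an unhandled exception
# and nothing downstream notices. Not the actual Cambrian/RBC code.

import logging
import threading
import time

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

class TsrRefresher:
    def __init__(self):
        self.healthy = True

    def refresh_once(self):
        raise RuntimeError("simulated fault while loading the TSR set")

    def run_silently(self):
        # Bad: any exception kills the thread; stale data persists unnoticed.
        while True:
            self.refresh_once()
            time.sleep(1)

    def run_failsafe(self):
        # Better: trap the fault, log it, and expose a health flag so the
        # failure is visible instead of the system carrying on silently.
        try:
            while True:
                self.refresh_once()
                time.sleep(1)
        except Exception:
            logging.exception("TSR refresh thread failed - flagging unhealthy")
            self.healthy = False

refresher = TsrRefresher()
t = threading.Thread(target=refresher.run_failsafe, daemon=True)
t.start()
t.join(timeout=2)
print("refresher healthy?", refresher.healthy)   # False: the failure is visible
```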

Edited by DY444
Link to post
Share on other sites

  • RMweb Gold
27 minutes ago, DY444 said:

 

I can't help thinking that the RAIB has not correctly reported the extent of differences in the SNCF version of the system.  The use of non-volatile memory gives you two copies of the information, one used to transmit data to trains and the other to present information to signalling staff.  Sometimes there are practical reasons for doing this but in general having two sources of truth is not a sound principle.  If you do go that way then you need to ensure you have foolproof mechanisms for keeping the data consistent or at the very least detecting that they are not, all of which adds complexity and more potential sources of design error.  The SNCF system must surely have such mechanisms or it is potentially no more sound than the Cambrian version.  In my view the architecture of the system on the Cambrian is the right one but they just did a bad job of implementing it.  As I said in my earlier post a design that allows an exception to silently terminate a critical thread is just plain bad in any kind of software.

If SNCF has built its use of ERTMS from its existing working systems there should be a number of checks in it at various stages, albeit it could - in typical SNCF fashion - be somewhat bureaucratic to say the least.  Having seen how they did some of their computerisation (in respect of timetabling, and going back to the 1990s here) they seem to have started very much from replicating the logic and checks they had in their manual system.  So I think that even where they are producing - as you say - two copies of the same thing, the component parts of what goes into the 'master' copy are going to go through a considerable system of 'checking' beforehand and after it is produced.

 

Effectively, as I would see ERTMS working in an SNCF situation, it will be presenting any sort of line speed information in exactly the same way as a Livre Ligne, because it will be carrying out (in that part of its function) exactly the same task as a Livre Ligne, so that part of it will be checked (and probably double-checked at least once) before it is issued because it is that to which a Driver drives.  It is, in some respects, the equivalent of a British Driver learning the road so it has to present all the relevant information - hence I can understand why it would include temporary speeds as well as permanent speeds.

Link to post
Share on other sites

  • RMweb Gold

This report is a few years old, but still relevant?

 

https://www.herefordtimes.com/news/11054498.BBC_radio_transmitter_disrupts_trains_between_Leominster_and_Ludlow/

 

And here is the Woofferton signalling in question -- still with the original GWR-designed immunity to RF interference. :)

 

[Photo: semaphore signalling at Woofferton]


Martin.

  • Like 1
  • Informative/Useful 2
Link to post
Share on other sites

  • RMweb Gold
1 hour ago, martin_wynne said:

This report is a few years old, but still relevant?

 

https://www.herefordtimes.com/news/11054498.BBC_radio_transmitter_disrupts_trains_between_Leominster_and_Ludlow/

 

And here is the Woofferton signalling in question -- still with the original GWR-designed immunity to RF interference. :)

 

[Photo: semaphore signalling at Woofferton]


Martin.

Similarly when the Cambrian RETB was being tested there were a lot of fuzzy TV pictures in the Dublin area.

Edited by TheSignalEngineer
  • Like 1
Link to post
Share on other sites

  • RMweb Premium
On 21/12/2019 at 10:22, caradoc said:

Concentration of signalling at fewer locations in order to save on staffing costs is nothing new; IIRC the GWR had one box controlling all of the triangular junction between Bradford-on-Avon and Trowbridge? The development of technology has simply meant that the process has been taken further than ever before, so that two Signallers now control the entire Cambrian network beyond Shrewsbury - how many staff would it take if the route was manually signalled?

 

I do agree however that the technology and systems must be made 100% reliable, with back-ups; I was involved in far too many loss-of-signalling incidents as a Controller! And just in the last few days major disruption was caused in the London Victoria and Glasgow Central areas due to signalling failures.

 

The Croydon shutdown was the fault of UK Power Networks (aka the National Grid).

 

Please read this article because it's a very good example of why fancy electronic systems are not as wonderful as the Whitehall Mandarins directing policy would have you believe.

 

https://www.networkrail.co.uk/running-the-railway/our-regions/southern/disruption-at-victoria-and-london-bridge/

  • Informative/Useful 3
Link to post
Share on other sites

3 hours ago, phil-b259 said:

 

The Croydon shutdown was the fault of UK Power Networks (aka the National Grid).

 

Please read this article because it's a very good example of why fancy electronic systems are not as wonderful as the Whitehall Mandarins directing policy would have you believe.

 

https://www.networkrail.co.uk/running-the-railway/our-regions/southern/disruption-at-victoria-and-london-bridge/

Interesting. As we rely more and more on intermittent sources for power generation, I wonder if we will need to change the thinking behind powering signalling systems on the mainline. In every metro signalling installation I have been involved with, all signalling supplies are conditioned by a UPS system, usually fed from two independent sources and a diesel generator in addition to the battery back-up. A voltage spike wouldn't crash the system.

  • Like 2
Link to post
Share on other sites

  • RMweb Gold

I'm not up with current policy on power supplies but in the early days of my signalling career, pre-electronics except for a few discrete component systems for ancillary equipment, the policy on major electrified lines was a local mains supply, a second supply derived from the incoming supply to traction and a diesel generator with a 30-second auto changeover.

As equipment on smaller schemes became more electrically based and the power requirement outstripped a simple battery supply I was involved in developing inverter based standby supplies which could support a small signal box for several hours. Later as electronics became more widespread (and fussy about what they were connected to) we moved on into conditioned supplies which in theory at least should be continuous and at a constant voltage rather than the 10% tolerance used on a straight AC feeder.

  • Like 1
Link to post
Share on other sites

32 minutes ago, TheSignalEngineer said:

I'm not up with current policy on power supplies but in the early days of my signalling career, pre-electronics except for a few discrete component systems for ancillary equipment, the policy on major electrified lines was a local mains supply, a second supply derived from the incoming supply to traction and a diesel generator with a 30-second auto changeover.

As equipment on smaller schemes became more electrically based and the power requirement outstripped a simple battery supply I was involved in developing inverter based standby supplies which could support a small signal box for several hours. Later as electronics became more widespread (and fussy about what they were connected to) we moved on into conditioned supplies which in theory at least should be continuous and at a constant voltage rather than the 10% tolerance used on a straight AC feeder.

All very sensible, so I wonder why an over-voltage can bring down a whole area? Presumably this protection wasn’t yet fitted. 

Link to post
Share on other sites

12 hours ago, phil-b259 said:

 

The Croydon shutdown was the fault of UK Power Networks (aka the National Grid).

 

Please read this article because it's a very good example of why fancy electronic systems are not as wonderful as the Whitehall Mandarins directing policy would have you believe.

 

https://www.networkrail.co.uk/running-the-railway/our-regions/southern/disruption-at-victoria-and-london-bridge/

 

That's a really good response from NR, thanks for the link phil-b259. It looks as if the issue on the railway side is the time taken to reset the equipment after a power surge; an interruption of an hour in such a busy area will always cause chaos.

 

  • Agree 1
Link to post
Share on other sites

  • RMweb Premium

LUL and Southern Railway used to generate their own power but moved to using the National Grid many years ago. A move that could have been a false economy in view of how often the power causes problems. 

Link to post
Share on other sites

  • RMweb Premium
9 hours ago, david.hill64 said:

Interesting. As we rely more and more on intermittent sources for power generation, I wonder if we will need to change the thinking behind powering signalling systems on the mainline. In every metro signalling installation I have been involved with, all signalling supplies are conditioned by a UPS system, usually fed from two independent sources and a diesel generator in addition to the battery back-up. A voltage spike wouldn't crash the system.

 

Did you not read the article properly?

As the MD says there are at least 2 (if not 3 in some places) alternative power supplies available to power signalling kit.

 

Moreover the power WAS NOT LOST! (it was of the wrong specification)

 

1 hour ago, TheSignalEngineer said:

I'm not up with current policy on power supplies but in the early days of my signalling career, pre-electronics except for a few discrete component systems for ancillary equipment, the policy on major electrified lines was a local mains supply, a second supply derived from the incoming supply to traction and a diesel generator with a 30-second auto changeover.

As equipment on smaller schemes became more electrically based and the power requirement outstripped a simple battery supply I was involved in developing inverter based standby supplies which could support a small signal box for several hours. Later as electronics became more widespread (and fussy about what they were connected to) we moved on into conditioned supplies which in theory at least should be continuous and at a constant voltage rather than the 10% tolerance used on a straight AC feeder.

 

Did you not read the article properly?

As the MD says there are at least 2 (if not 3 in some places) alternative power supplies available to power signalling kit.

 

Moreover the power WAS NOT LOST! (it was of the wrong specification)

 

The over voltage spike lasted for 20 seconds so is well within your '30 second' power interruption time

 

1 hour ago, david.hill64 said:

All very sensible, so I wonder why an over-voltage can bring down a whole area? Presumably this protection wasn’t yet fitted. 

 

As the MD says, the electronics shut down to prevent the equipment being damaged. Had the various TDM crates been damaged (a real possibility had they not turned themselves off) then you would have been looking at days before the signalling was restored - not hours!

 

Yes, it may well be possible to improve the resilience of the internal power supplies within the TDM systems etc so they can cope - but as with all manufacturers, there is an assumption that the National Grid will do their job properly.

 

Please note that the massive power cut a few months back that left 700s stranded all over the place was a deliberate action so as to preserve the National Grid frequency at 50Hz. While the tolerances for voltage drift are greater, when they are exceeded the power should have been cut to the affected areas. That may have ensured signalling was retained (or that the outage was brief, as the TDMs will reboot themselves if power is lost and then restored).

Link to post
Share on other sites

  • RMweb Premium
56 minutes ago, Chris116 said:

LUL and Southern Railway used to generate their own power but moved to using the National Grid many years ago. A move that could have been a false economy in view of how often the power causes problems. 

 

As far as the Southern Railway goes, they only generated their own power for the SW inner suburban routes!

 

The LBSCR's overhead electrification purchased power from an external company, and under the SR the same was true for the conversions and the SE suburban routes.

 

All the mainline extensions to the coast  from 1932 had power supplied from the newly set up National Grid.

Link to post
Share on other sites

  • RMweb Premium
1 hour ago, caradoc said:

 

That's a really good response from NR, thanks for the link phil-b259. It looks as if the issue on the railway side is the time taken to reset the equipment after a power surge; an interruption of an hour in such a busy area will always cause chaos.

 

 

Indeed it was.

 

Although the sites are relatively close together, traffic congestion in that part of South London will make it hard for the techs to get between them quickly.

 

As it happens, when the failure occurred the East Croydon team were down at East Grinstead (they look after that line for faulting and maintenance), though I understand the folks from Streatham depot were closer.

 

Yes, you could say that the teams should have been kept at the depots while the evening peak was on - but the accountants don't like that as it's 'wasting resources', so there is an expectation that folk will be carrying out non-service-disrupting maintenance tasks instead.

 

As with most things it's a trade-off - and seeing as the route management are quite happy to have all three Sussex Outer faulting teams engaged on p-way work overnight and simply cross their fingers that no big faults come in, having folk sitting round at Croydon 'just in case' is not going to happen long term.

Link to post
Share on other sites

The fact is that these kinds of catastrophic failures are very rare, so it's not a good use of resource to have people sitting around waiting for the one or two times a year that they might be useful.

 

I don't know how multi skilled the maintenance S&T teams are, but in E&P there are several specialisms and it's impossible to be an expert in all of them.

  • Agree 1
Link to post
Share on other sites

4 hours ago, phil-b259 said:

 

Did you not read the article properly?

As the MD says there are at least 2 (if not 3 in some places) alternative power supplies available to power signalling kit.

 

Moreover the power WAS NOT LOST! (it was of the wrong specification)

 

 

Did you not read the article properly?

As the MD says there are at least 2 (if not 3 in some places) alternative power supplies available to power signalling kit.

 

Moreover the power WAS NOT LOST! (it was of the wrong specification)

 

The over voltage spike lasted for 20 seconds so is well within your '30 second' power interruption time

 

 

As the MD says, the electronics shut down to prevent the equipment being damaged. Had the various TDM crates been damaged (a real possibility had they not turned themselves off) then you would have been looking at days before the signalling was restored - not hours!

 

Yes, it may well be possible to improve the resilience of the internal power supplies within the TDM systems etc so they can cope - but as with all manufacturers, there is an assumption that the National Grid will do their job properly.

 

Please note that the massive power cut a few months back that left 700s stranded all over the place was a deliberate action so as to preserve the National Grid frequency at 50Hz. While the tolerances for voltage drift are greater, when they are exceeded the power should have been cut to the affected areas. That may have ensured signalling was retained (or that the outage was brief, as the TDMs will reboot themselves if power is lost and then restored).

Don't get shirty! 

I did read the article, which I thought was constructive and informative.

Clearly all of the available power supplies had the same problem, which isn't surprising if they are all from the grid. My point is that if we can no longer rely on stable power supplies within the correct specification, then the industry will have to spend money to compensate for the deficiencies of our new generation system.

Yes, the system shut itself down to protect itself, but it would be more sensible for the protection to be upstream of the conditioning/UPS/secondary source so that the UPS kicks in when the power is out of spec. It may not have been necessary before, but recent events are likely to be the precursors of the future so thinking will have to change. Working in countries where the mains power supply isn't as good as that which we (used to) have means that the protection on essential circuits is arguably better than ours. If you had read my comment properly instead of getting in a huff I think you would have understood that I was getting at the grid, not at NR.
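
A minimal sketch of the upstream supervision I mean is below, assuming a nominal 230 V feed and a +/-10% tolerance (both figures are assumptions for illustration only): the monitor transfers the signalling load to the UPS whenever the incoming supply drifts out of specification in either direction, instead of leaving the downstream electronics to protect themselves by shutting down.

```python
# Sketch of upstream supply supervision: monitor the incoming feed and
# transfer to the UPS whenever it is outside tolerance, over- or under-voltage.
# Nominal voltage and tolerance are assumed figures for illustration.

NOMINAL_V = 230.0
TOLERANCE = 0.10

def select_source(mains_v: float, ups_available: bool = True) -> str:
    low, high = NOMINAL_V * (1 - TOLERANCE), NOMINAL_V * (1 + TOLERANCE)
    in_spec = low <= mains_v <= high
    if in_spec:
        return "mains"
    return "ups" if ups_available else "controlled shutdown"

for sample in (230.0, 198.0, 290.0, 140.0):
    print(f"{sample:5.0f} V -> feed signalling from: {select_source(sample)}")
```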

Link to post
Share on other sites

  • RMweb Premium

"but as with all manufacturers, there is an assumption that the national grid will do their job properly"

Is that not the issue? Can we make that assumption as we move to more diverse power sources?

I lived for six years in Kosova. In the winter it was normal to have rota power cuts because of a lack of capacity. One learned to cope. We should be preparing to do the same if we want to rely on sensitive electronics, especially since over the past few years the reserve in the generating system has been cut to a fraction of what it was in CEGB days.

But a warning. When I was working I knew the chief engineer of one of the big property companies. One day he found that his car, in the company underground car park at HQ, had a flat battery. No problem: just charge it from the starter battery for the building's standby generator. But that was flat too. As he said to me, if he couldn't ensure that the system he was responsible for was actually operable in his own office, what hope was there of every such installation around the company's estate being in working order?

And I agree that the protection should be such as to protect the whole installation rather than some way down the equipment chain.

BTW what has all this got to do with the Cambrian line other than the fact that its signalling is electronic and therefore susceptible to such things?

Jonathan

  • Agree 1
Link to post
Share on other sites

  • RMweb Premium
2 hours ago, david.hill64 said:

Don't get shirty! 

I did read the article, which I thought was constructive and informative.

Clearly all of the available power supplies had the same problem, which isn't surprising if they are all from the grid. My point is that if we can no longer rely on stable power supplies within the correct specification, then the industry will have to spend money to compensate for the deficiencies of our new generation system.

Yes, the system shut itself down to protect itself, but it would be more sensible for the protection to be upstream of the conditioning/UPS/secondary source so that the UPS kicks in when the power is out of spec. It may not have been necessary before, but recent events are likely to be the precursors of the future so thinking will have to change. Working in countries where the mains power supply isn't as good as that which we (used to) have means that the protection on essential circuits is arguably better than ours. If you had read my comment properly instead of getting in a huff I think you would have understood that I was getting at the grid, not at NR.

 

As far as I am aware, UPS systems (railway and non-railway) are designed to kick in on the loss of power below the standard tolerances (which may of course happen during some types of 'power spikes') - not prolonged voltage spikes above the maximum National Grid tolerances. That is why I 'got shirty' as you put it.

 

Traditional relay-based equipment is quite tolerant of overvoltages (though this may affect the lifespan of relays should they get too warm) but anything computer-based isn't - which is why a prompt shutdown is built in.

 

It is of course possible to construct advanced power supply systems that would be able to maintain supplies in the event of an over-voltage input - but these are not 'off the shelf' items as it were.

 

Agreed that if the National Grid is unable to deliver what it is supposed to (i.e. voltage and frequency within tolerance), or to cut the power immediately when such events occur, then NR and others are going to need to invest in suitable power management equipment to do that.

Edited by phil-b259
Link to post
Share on other sites
