 

E.R.T.M.S.



2 minutes ago, Purnu said:

Hi Simon,

 

Out of curiosity, does ERTMS give Controllers and Drivers the option to accept or change the brake retardation rate as well as the adhesion factor?

 

Yes and No.

 

The braking curves are constantly recalculated against the inputted train data, so I assume that if the driver changed the Train Braking Percentage data, the braking curve would change accordingly. Of course, in a fixed-formation multiple unit there would be no need to change this data.
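As a very rough illustration of why that matters, here is a back-of-envelope Python sketch only - the real Subset-026 braking model is far more involved, and the percentage-to-deceleration factor below is a made-up placeholder:

import math

def curve_speed_ms(distance_to_target_m, brake_percentage):
    # Hypothetical conversion: braking percentage -> constant deceleration.
    # The real Subset-026 model uses conversion curves, correction factors
    # and gradient terms; 0.01 here is just a placeholder for the sketch.
    deceleration = 0.01 * brake_percentage          # e.g. 90% -> 0.9 m/s^2
    # Speed from which the train can just stop at the target: v = sqrt(2*a*d)
    return math.sqrt(2.0 * deceleration * distance_to_target_m)

# Re-entering a lower braking percentage flattens the whole curve:
for pct in (135, 90, 60):
    print(pct, "%:", round(curve_speed_ms(1000, pct) * 3.6, 1), "km/h at 1 km from the target")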

 

Simon


35 minutes ago, corneliuslundie said:

An interesting thought about retaining some lineside signals in case of ERTMS system failure for whatever reason.

But since most lineside signals are now operated from many miles away over equally vulnerable IT systems, would it be any advantage? I am awaiting the day when a fire in a signalling centre knocks out a few hundred miles of a main line - well, not hoping for it, but you know what I mean.

Bring back those metal thingies which were moved using wires!

Jonathan

PS Like we still have on the Marches line.

 

Hi Jonathan,

 

Leaving lineside signals in alongside ETCS is, as P2R are finding out, a complete pain. From a cost point of view, it just doubles the cost, as you still have all the equipment that you would normally remove with ETCS without signals. From a failure standpoint, ETCS without signals means there is much less to go wrong. The control system, interlocking and RBC are highly reliable and in reality rarely fail; balises can really only 'fail' if someone walks off with one; and the on-train equipment is highly reliable.

 

I know it is very convenient for people to say that ROCs are problematic because it is 'putting all your eggs in one basket', but the reality is that whilst there have been some major delays caused by ROCs being evacuated or power failures, these are few and far between. ROCs are engineered to have highly effective fire suppression systems and back-up power supplies, so such incidents are rare. I know that in York ROC (I think it's York, or it may be Leeds!) the interlockings for the various areas are in different rooms with individual power supplies, so that should something happen in one room it doesn't spread through everything.

 

However, should anything dreadful happen, like a fire, then (and yes, I know that this is very crude and basic and the reality is a lot more complex) replacing a CBI interlocking is a matter of getting a 'blank' interlocking, putting the data back in and hooking it up to the lineside. It's not as huge an amount of work as re-wiring a relay interlocking or casting a mechanical frame*.

 

Simon

 

*I'm sure someone is going to explain how wrong I am now 😀


2 hours ago, big jim said:

There is a shunt mode available, but every time you change ends on a 97 you have to input your data again, which was frustrating and time-consuming

 

Hi Jim,

 

I've just looked this up in Subset-026 and, when changing ends, the 'Start of Mission' procedure is triggered in the cab being opened, but the EVC should remember the data that you inputted, as long as you didn't end the mission before closing the 'old' cab. I assume that the 97's EVC isn't configured to remember the data when a cab is closed as it isn't Baseline 3 compliant.

 

This is only true when you are using a single loco; if you are running round using two locos, the driver data will have to be inputted again when you open up the cab, as the data can't be transferred from one loco to the other. However, you can input your driver ID in the second loco before you set off and it will remember it.
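A toy sketch of that retention behaviour as I read it - illustrative Python only, not the real Subset-026 Start of Mission logic, and all the names and values are hypothetical:

class ToyEVC:
    """Toy model only: one EVC per loco; entered data survives a cab change
    on the same loco provided the mission wasn't ended first."""

    def __init__(self):
        self.train_data = None
        self.driver_id = None

    def start_of_mission(self):
        # Whatever is still stored on this EVC is offered back to the driver.
        return {"train_data": self.train_data, "driver_id": self.driver_id}

    def end_mission(self):
        self.train_data = None  # ending the mission discards the train data

loco_97 = ToyEVC()
loco_97.train_data = {"length_m": 20, "brake_pct": 90}
print(loco_97.start_of_mission())      # change ends, same loco: data still there

second_loco = ToyEVC()
second_loco.driver_id = "1234"         # driver ID can be keyed in before setting off
print(second_loco.start_of_mission())  # train data must be re-entered on this loco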

 

Simon

Edited by St. Simon


I’ll be honest and say I’ve not done ERTMS for a good 5 years now, so it’s all a bit foggy memory-wise!

 

Truth be told I’ve tried to block it from my memory! 
 

I may be having a trip down there in July with the weed sprayer - not driving, obviously, as I’m out of date for both ERTMS and the route - but I’m sure it will come back to me when I see the driver using it again.

 

Regarding mixing signalling, Machynlleth is like that, and that was the bit of the system that I really didn’t like: going from position lights to block markers, working ‘on sight’, combined with shunting there too. It was the one place I thought I’d mess up. I’m sure if I’d done it more regularly than I did, it would be as natural as driving under conventional signalling.

 

 

Edited by big jim


For both ERTMS and signalling centres, it was not things like fires I was thinking of but communications failures. Someone would only need to hack the communications system and there could be major problems, even accidents if the hackers knew what they were doing and wanted to cause them. And hacking is becoming all too common.

My comment about retaining lineside signals was not meant to suggest that I think it is a good idea. Sorry if I gave that impression.

Jonathan


2 hours ago, corneliuslundie said:

For both ERTMS and signalling centres, it was not things like fires I was thinking of but communications failures. Someone would only need to hack the communications system and there could be major problems, even accidents if the hackers knew what they were doing and wanted to cause them. And hacking is becoming all too common.

My comment about retaining lineside signals was not meant to suggest that I think it is a good idea. Sorry if I gave that impression.

Jonathan


Hi Jonathan,

 

ETCS communication is extremely secure to counteract any hacking attempts: everything is timestamped and has authentication built in, in both directions. The way the system works also requires balise messages, which certainly can’t be hacked and changed remotely.
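For anyone curious what "timestamped with authentication built in" means in practice, here is a purely illustrative Python sketch. The real EuroRadio safety layer uses its own MAC algorithm and key management, not HMAC-SHA256, and the key below is hypothetical, but the principle of a keyed tag plus a freshness check is the same:

import hashlib
import hmac
import time

SESSION_KEY = b"hypothetical per-session key agreed at connection set-up"

def protect(message):
    timestamp = int(time.time())
    payload = message + timestamp.to_bytes(8, "big")
    tag = hmac.new(SESSION_KEY, payload, hashlib.sha256).digest()
    return {"message": message, "timestamp": timestamp, "mac": tag}

def accept(frame, max_age_s=5):
    payload = frame["message"] + frame["timestamp"].to_bytes(8, "big")
    expected = hmac.new(SESSION_KEY, payload, hashlib.sha256).digest()
    fresh = abs(time.time() - frame["timestamp"]) <= max_age_s
    return fresh and hmac.compare_digest(expected, frame["mac"])

frame = protect(b"movement authority to 12.345 km")
frame["message"] = b"movement authority to 99.999 km"   # tamper with it in transit...
print(accept(frame))                                    # ...and it is rejected: False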

 

I know the military have tried to hack ETCS communications as a test (a friend in the RAF just so happened to be involved) and were unsuccessful.

 

The interlockings are also very secure; they are a closed system and use a custom code language.
 

Even so, the rail industry is doing a lot to constantly improve cyber security.


Simon

Edited by St. Simon

I'm not familiar with the details for ETCS, but I know the SSI systems from the late 1980s employed digital coding of the data streams to ensure that controls and indications wouldn't be acted on if they were corrupted or, for example, the cable to the trackside was inadvertently connected to the wrong interlocking cabinet.  That's a much higher level of protection than the older relay interlockings, where most of the trackside cables just carry simple unencoded currents (although the short tail cables from the trackside modules to the actual equipment still do this).  It's similar in principle to the sort of coding that ensures that credit card transactions can be securely processed when the reader is only connected by non-secure wifi and landlines.  
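By way of illustration only (this is not the real SSI data-link format, just the principle described above, with a hypothetical address and a standard CRC standing in for the actual coding): a message that arrives corrupted, or that is addressed to a different interlocking, is simply ignored.

import zlib

MY_INTERLOCKING_ID = 0x2A   # hypothetical address of "our" trackside cabinet

def encode(interlocking_id, controls):
    body = bytes([interlocking_id]) + controls
    return body + zlib.crc32(body).to_bytes(4, "big")   # append a check code

def decode(frame):
    body, check = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(body) != check:
        return None                    # corrupted in transit: do nothing
    if body[0] != MY_INTERLOCKING_ID:
        return None                    # cable connected to the wrong cabinet: do nothing
    return body[1:]                    # only now act on the controls

print(decode(encode(0x2A, b"set route 101")))   # b'set route 101' - accepted
print(decode(encode(0x2B, b"set route 101")))   # None - wrong address, ignored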


15 hours ago, corneliuslundie said:

For both ERTMS and signalling centres, it was not things like fires I was thinking of but communications failures. Someone would only need to hack the communications system and there could be major problems, even accidents if the hackers knew what they were doing and wanted to cause them. And hacking is becoming all too common.

My comment about retaining lineside signals was not meant to suggest that I think it is a good idea. Sorry if I gave that impression.

Jonathan

 

You can't hack something that has no external connection to a public network without getting physical access to the equipment (note the equipment - not the interconnecting network cables).  Hollywood would have you believe that anyone can hack into anything from anywhere - that's nonsense.  Hacking, with very few exceptions, is only feasible because of lax IT security practices and/or social engineering.

 

There have been fire alarms, security alerts, power failures and staff shortages/illness which have closed signalling centres for a few hours, but power boxes controlling large areas have been around since the early 1960s and, if there has been a fire which caused the extended loss of such a facility, I don't remember it. Indeed, the last signal box fire I can recall which caused a major problem for an extended period was Cannon St in the late 50s. That aside, the biggest risks are cable routing and record data being wrong or missing, no duplicated and diversely routed cables (see the London Bridge cable fire in the 1980s), or duplicated cables supposedly diversely routed being in the same cable run somewhere by mistake.

Edited by DY444


So how did Anonymous manage to hack the Russian public TV system?

And there are surely vital radio links in the system. Otherwise why all those masts on railway property (which can also get felled in storms)?

Sorry, not convinced.

Jonathan


1 hour ago, corneliuslundie said:

So how did Anonymous manage to hack the Russian public TV system?

And there are surely vital radio links in the system. Otherwise why all those masts on railway property (which can also get felled in storms)?

Sorry, not convinced.

Jonathan

Radio links and cables are only the "transport medium" for the various data systems. The security/coding/one-off algorithms etc. are part and parcel of the "end equipment/systems".


3 hours ago, corneliuslundie said:

So how did Anonymous manage to hack the Russian public TV system?

And there are surely vital radio links in the system. Otherwise why all those masts on railway property (which can also get felled in storms)?

Sorry, not convinced.

Jonathan

My understanding is that they hacked the TV guides that appear on smart TVs to show their own message instead of the programme information.  These will almost certainly have a gateway to the internet, so programme providers can log in and provide the details that are to be broadcast.  Such systems can be hacked, most likely by someone getting hold of the password.

 

They are also not "vital" systems in the engineering sense - it's unlikely anyone will die if the TV programme fails.  Railway signalling systems are built to much higher integrity and separated from public access in the ways described by others.  They are also protected by interlockings which only respond to requested actions if it is safe to do so, so even if someone got access to the control system it would be extremely unlikely that they could cause a derailment or collision.
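A toy example of that last point - not any real interlocking's data or logic, just the principle that the control system can only request and the interlocking decides:

# Hypothetical layout: routes R1 and R2 conflict, and R2's track section is occupied.
CONFLICTS = {"R1": {"R2"}, "R2": {"R1"}}
occupied = {"R1": False, "R2": True}
locked_routes = set()

def request_route(route):
    if occupied.get(route, True):
        return False                           # track circuit occupied: refuse
    if CONFLICTS.get(route, set()) & locked_routes:
        return False                           # conflicting route already set: refuse
    locked_routes.add(route)                   # safe, so lock and set the route
    return True

print(request_route("R1"))   # True  - granted because it is safe
print(request_route("R2"))   # False - refused, no matter who (or what) asked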


Hi,

 

Following on from NR's HST Power Cars, a Grand Central Class 180 has now been successfully tested on E.T.C.S. Level 2:

 

Network Rail's RIDC Tests Its First Retrofitted ETCS Train | Railway-News

 

I don't quite understand the reference to Packet 44 allowing mph over km/h; Packet 44 is all about different train functions that aren't in the 'core' E.T.C.S. data, rather than mph conversion, and it has been in use on the network for years for APCO, ASDO, ABDO, CSDE and TASS.

 

I also find it slightly funny that a Class 180 is the first to be fitted, given how unreliable they once were!

 

Simon


1 hour ago, St. Simon said:

Hi,

 

Following on from NR's HST Power Cars, a Grand Central Class 180 has now been successfully tested on E.T.C.S. Level 2:

 

Network Rail's RIDC Tests Its First Retrofitted ETCS Train | Railway-News

 

I don't quite understand the reference to Packet 44 allowing mph over km/h; Packet 44 is all about different train functions that aren't in the 'core' E.T.C.S. data, rather than mph conversion, and it has been in use on the network for years for APCO, ASDO, ABDO, CSDE and TASS.

 

I also find it slightly funny that a Class 180 is the first to be fitted, given how unreliable they once were!

 

Simon

As I posted on here on 11th April.

 


12 minutes ago, ess1uk said:

As I posted on here on 11th April.

 

 

Ah, sorry, I thought your post said that the fitting had been started, but having re-read it, you do say it is being used - apologies!

 

Simon


On 17/05/2022 at 08:26, corneliuslundie said:

So how did Anonymous manage to hack the Russian public TV system?

And there are surely vital radio links in the system. Otherwise why all those masts on railway property (which can also get felled in storms)?

Sorry, not convinced.

Jonathan

 

They didn't.  They hacked the TV guides which are on publicly accessible networks. 

 

Hacking and failure of the transmission systems are wholly different things. 

 

A radio tower falling over is analogous to someone cutting lineside cables.  A ruddy nuisance, disruptive and often time-consuming and expensive to fix.  In itself, though, it presents no danger to traffic (except through a train hitting a fallen tower if it falls onto the line).

 

By contrast, hacking is the unauthorised taking over of control of a system, invariably for nefarious purposes.  That is next to impossible by tapping into the transmission medium because of encryption - that's why you need access to the end equipment where the signal is encrypted or decrypted.  There's also the non-trivial issue with cables of accurately identifying the correct cores.  The only circumstance where it is potentially possible is if the signal is not encrypted, but that would be wholly negligent for any kind of signal that matters.


On 17/05/2022 at 08:14, DY444 said:

 

There have been fire alarms, security alerts, power failures and staff shortages/illness which have closed signalling centres for a few hours but power boxes controlling large areas have been around since the early 1960s, and, if there has been a fire which caused the extended loss of such a facility then I don't remember it. 

Depends what you mean by extended, but the fire brigade had to tackle a fire a while back in a building near the former Kings Cross box.  Because there were acetylene cylinders on that site, the box had to be evacuated for a couple of days, causing severe disruption until they manned all the emergency NX panels scattered around the district - and that took a lot more signalmen than Kings Cross had.


10 hours ago, Michael Hodgson said:

Depends what you mean by extended, but the fire brigade had to tackle a fire a while back in a building near the former Kings Cross box.  Because there were acetylene cylinders on that site, the box had to be evacuated for a couple of days, causing severe disruption until they manned all the emergency NX panels scattered around the district - and that took a lot more signalmen than Kings Cross had.

 

OK, fair point, but it seems to me that significant disruption would still be caused by such an incident occurring somewhere near the line but nowhere near the box - between Bounds Green depot and Finsbury Park, for example.  I accept that the disruption would not be as bad in such a scenario, as the northern end of the KX box area would still be usable, but it would nevertheless still be bad, particularly for long-distance services.


On 17/05/2022 at 08:14, DY444 said:

the biggest risk is cable routing

Indeed. When advising on designing data centres for high availability, one of the serious risks is "the man with the digger" accidentally chopping through the cabling (both data and power cables). Who needs a planned attack when sheer incompetence will do the job far better?

 

Serious data centres typically have 3 or more separate sets of cabling using different physical routing for just this issue. Even then, care needs to be taken regarding the wider networks that the cables are attached to, to avoid single points of failure in the network providers' systems.
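As a back-of-envelope illustration, with entirely made-up numbers, of why the physical diversity matters as much as the count of routes:

p_route_down = 0.02     # assumed chance any one cable route is out at a given time

# Three genuinely independent routes: all down at once only if all three fail.
print(f"independent routes: {p_route_down ** 3:.4%}")   # 0.0008%

# Three "diverse" routes that actually share one duct: one digger takes out all three.
print(f"shared duct:        {p_route_down:.4%}")         # 2.0000%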

 

Yours, Mike.


3 minutes ago, KingEdwardII said:

Indeed. When advising on designing data centres for high availability, one of the serious risks is "the man with the digger" accidentally chopping through the cabling (both data and power cables). Who needs a planned attack when sheer incompetence will do the job far better?

 

Serious data centres typically have 3 or more separate sets of cabling using different physical routing for just this issue. Even then, care needs to be taken regarding the wider networks that the cables are attached to, to avoid single points of failure in the network providers systems.

 

Yours, Mike.

 

Indeed.  I remember a major failure in the BT network when two supposedly diverse major north-south cable routes somehow ended up in ducts only a few metres apart somewhere in the Midlands.  Eventually the man in the digger came along and the inevitable happened.   


2 hours ago, DY444 said:

 

OK, fair point, but it seems to me that significant disruption would still be caused by such an incident occurring somewhere near the line but nowhere near the box - between Bounds Green depot and Finsbury Park, for example.  I accept that the disruption would not be as bad in such a scenario, as the northern end of the KX box area would still be usable, but it would nevertheless still be bad, particularly for long-distance services.

It's all changed since then - the area is now under York ROC.

I don't know whether they can still use the local NX emergency panels that Kings Cross had - those were tested once a month (but only one at a time) and they were renewed a few years before the power box closed.

When a major station is forced to close, it's the commuter service that's the big problem, especially if it happens during the day and they can't get home.  A limited long-distance service can be run by diversion to other main line stations in the capital, although other cities might have fewer alternatives.

 

Of course the ROCs are even more of a potential single point of failure than PSBs.  If my understanding is correct there is enough redundancy of hardware and communications systems capability to enable other locations to take over in the event of a disaster at York, as the big banks do with their data centres and ATM networks.


1 hour ago, Michael Hodgson said:

It's all changed since then - the area is now under York ROC.

I don't know whether they can still use the local NX emergency panels that Kings Cross had - those were tested once a month (but only one at a time) and they were renewed a few years before the power box closed.

When a major station is forced to close it's the commuter service that's the big problem, especially if it happens during the day and they can't get home.  A limited long distance service can be run by diversion to other main line stations in the capital, although other cities might have fewer alternatives.

 

Of course the ROCs are even more of a potential single point of failure than PSBs.  If my understanding is correct there is enough redundancy of hardware and communications systems capability to enable other locations to take over in the event of a disaster at York, as the big banks do with their data centres and ATM networks.

 

I'm aware of the change to York ROC.

 

I don't think a limited main line service can be run by diversion to other terminals, not without a lot of pre-planning (which in the kind of incident at issue here isn't going to be the case); this isn't BR, with its general-purpose train crew depots with wide traction and route knowledge.  AIUI, even in BR days, its pre-packaged, ready-to-go plan to divert portions of the WCML service into Paddington at a moment's notice rarely, if ever, actually worked very well when it was needed.  The idea that, for example, Azumas could be diverted into St. Pancras or Liverpool St without months of pre-planning is fanciful imo.

 

I am not aware of any capability for one ROC to take over another in a crisis.  I believe it was discussed at one time, but I am not aware that it was proceeded with.  Having been involved in something similar in another field where it absolutely was required, I'm not surprised.  The amount of equipment and training needed was eye-watering and, unlike many organisations with so-called disaster recovery plans, this one was tested regularly in live operation.  Getting it right was very, very expensive.

Edited by DY444

4 hours ago, DY444 said:

 

Indeed.  I remember a major failure in the BT network when two supposedly diverse major north-south cable routes somehow ended up in ducts only a few metres apart somewhere in the Midlands.  Eventually the man in the digger came along and the inevitable happened.   

BT are a strange lot when it comes to the routes of calls or connections.  On the Central Wales line, when NSTR was introduced, the WR contracted with BT to supply direct lines between the local token machines and to the control centre.  By the mid-1980s the rate of token machine failures was going sky-high, and the WR S&T folk did some investigation and found it was wholly due to comms failures to/from the token machines.  So the next step was to get BT to fully log details of what was happening with their 'direct lines'.

 

And when the results of that were received it was very quickly obvious that the direct lines which BR were paying for were anything but direct; in fact, 'incredibly indirect' would have been a far more accurate description.  We had the details to study on the Rules & Regs section (as the operating dept paid the bill) and we found it was quite common for the 'calls' to be routed via Birmingham or Manchester, and the list included several where they had been routed via Glasgow.  BT's 'explanation' was that their exchanges automatically searched for a clear route for what they called 'a direct line', and that meant that if more immediately local lines were busy the exchange would search for a route further afield, continuing to search until it found one.  No wonder we were getting token system failures.


1 hour ago, The Stationmaster said:

BT are a strange lot when it comes to the routes of calls or connections.  On the Central Wales line, when NSTR was introduced, the WR contracted with BT to supply direct lines between the local token machines and to the control centre.  By the mid-1980s the rate of token machine failures was going sky-high, and the WR S&T folk did some investigation and found it was wholly due to comms failures to/from the token machines.  So the next step was to get BT to fully log details of what was happening with their 'direct lines'.

 

And when the results of that were received it was very quickly obvious that the direct lines which BR were paying for were anything but direct; in fact, 'incredibly indirect' would have been a far more accurate description.  We had the details to study on the Rules & Regs section (as the operating dept paid the bill) and we found it was quite common for the 'calls' to be routed via Birmingham or Manchester, and the list included several where they had been routed via Glasgow.  BT's 'explanation' was that their exchanges automatically searched for a clear route for what they called 'a direct line', and that meant that if more immediately local lines were busy the exchange would search for a route further afield, continuing to search until it found one.  No wonder we were getting token system failures.

 

Yes, the digital PSTN network introduced through the 80s and early 90s had quite sophisticated built-in re-routing capability (sometimes a bit too clever for its own good, especially under high traffic).  Most of the time, though, it was quite good at routing calls through the network that otherwise would have failed.

