Guide to rolling your own OSI Monarch Extractor #2 - Decisions, Decisions

Now that you really know your system, you need to make some high level decisions about what you are going to model, and how you are going to do it.

Import Strategy

Your import strategy defines how you import data from GIS and other systems into the ADMS to form the model. You can perform three types of import: full, incremental, or group. Full imports bring in the entire data model, and for small systems this might be all that's required. Incremental imports take only the data that has changed since the last import, and group imports bring in only the specific groups requested. You can use any combination of methods that suits. If using incremental imports, you need a method of generating incremental change sets from your data, and you need to decide which data changes should trigger inclusion in a change set. For our import strategy we used weekly full imports and nightly incremental imports. Our implementation was driven by checkpoints in the GIS, which means that data changes in the asset management database do not trigger changes in the ADMS. This was an acceptable trade-off for us, but may not be for other implementations.
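If you do go incremental, the core of the change-set generator is just a diff between the last exported snapshot and the current one. A minimal sketch in Python (the snapshot shape, a record id mapped to an attribute dict, is an assumption of mine, not the format any particular GIS emits):

```python
import hashlib

def change_set(previous: dict, current: dict):
    """Diff two model snapshots (record id -> attribute dict) into the
    adds, updates and deletes an incremental import needs."""
    def digest(attrs):
        # Hash a canonical form of the attributes so order doesn't matter
        return hashlib.sha256(repr(sorted(attrs.items())).encode()).hexdigest()
    added   = [k for k in current if k not in previous]
    removed = [k for k in previous if k not in current]
    changed = [k for k in current
               if k in previous and digest(current[k]) != digest(previous[k])]
    return added, changed, removed
```

In practice you would also filter the attribute dict down to only the fields that matter to the ADMS before hashing, so cosmetic GIS edits don't trigger spurious change-set entries.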

Next, you need to decide how to group devices. There are many ways to do this; an obvious choice is to group by feeder. In our implementation we decided that this was too large, and we instead use more granular grouping at the low voltage level. It helps if you can name your groups with a human-readable identifier, but this isn't always possible; in our implementation there are many small groups, so using internal IDs was the more straightforward option.


Model Scope

A fundamental decision is what you want to model. Do you want to model your entire network, from source to load, or just the high voltage network, or just your zone substations down, or something else? This decision may be driven by the quality of your data, or your business processes.


Sources

Sources are the things that energize your network. You should energize your network from points that have metering telemetry and per-phase voltage telemetry; otherwise DPF may have trouble calculating flows between the source and the next downstream point that has metering telemetry.

Line impedances

Line impedances may be supplied in one of three formats:

  • Construction codes
  • Sequence components
  • Impedance matrix

These formats are described in detail in OSI's documents, but you may wish to integrate the data from any modelling tools you already use. For example, in our implementation we converted existing data calculated with Leika into impedance matrices in IDF format, so we can be sure that our modelling tools (Sincal) and eMap are using the same line parameters.
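As an illustration of the kind of conversion involved, sequence components can be expanded into a phase impedance matrix: for a balanced (transposed) line the diagonal entries are (Z0 + 2*Z1)/3 and the off-diagonal entries are (Z0 - Z1)/3. A Python sketch (the transposition assumption, and ignoring shunt terms, are simplifications of mine, and this is not the IDF file format itself):

```python
def sequence_to_phase_matrix(z0: complex, z1: complex):
    """Expand zero/positive sequence impedances (ohms) into the 3x3 phase
    impedance matrix of a transposed line (so Z2 == Z1 and all mutual
    couplings are equal)."""
    zs = (z0 + 2 * z1) / 3   # self impedance (diagonal)
    zm = (z0 - z1) / 3       # mutual impedance (off-diagonal)
    return [[zs if r == c else zm for c in range(3)] for r in range(3)]
```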


AORs

AORs (Areas of Responsibility) are a method of splitting up your network into areas that can have different access permissions. I'd suggest avoiding AOR 1 for eMap devices if possible, as this area is often used for system-level alarms. Some system alarms have a setting to change the AOR; others do not. Since filtering alarms based on AOR can be quite useful, it helps to keep these alarms in a separate AOR from your eMap devices.

Once you have the high level design sorted, it's time to implement your extractor. But watch out for some traps... see the next post for details.

| Posted in SCADA | No Comments »

Guide to rolling your own OSI Monarch Extractor #1 - Know Thy System

The company I work for recently embarked on a journey to implement an Advanced Distribution Management System, and the solution we landed on was Open Systems International's Monarch platform. The core part of any ADMS is the network model, which contains all the geographic and asset information you need to calculate real time connectivity and power flows for your electricity network. The Monarch platform does not contain a GIS - it is designed to extract the model from your existing GIS and run alongside it, and the software that performs this process is colloquially known as the Extractor. OSI provide extractors for a few mainstream GIS platforms, for example Esri and Intergraph. They do not provide one (at the time of writing at least) for GE's Smallworld, so we had to roll our own. This is the first in a series of posts that describes this journey for anyone else who happens to need to head down this path, because there is no documentation provided to help you. Which brings me to the subject of this post: Know Thy System.


| Posted in SCADA | No Comments »

Port mirroring with GRE

I recently had a requirement to mirror a port from a physical machine to a virtual machine. Initially I thought it would be pretty trivial, but when I came to implement it, it turned out to be less than straightforward. While it certainly is possible, in a complex data center environment there can be a lot of changes that need to be made which might make it an unattractive option.


If the distribution switch you are using happens to support ERSPAN, then you can set that up and send all the traffic to the VM directly. Your traffic will have the GRE header attached to each packet, but for many applications this may be acceptable. In our case it was not, so I wrote a filter driver using WinpktFilter to strip the GRE headers off before the packets are passed up the stack to the protocol drivers.

In our case, we also didn't have a switch that supported ERSPAN, so I wrote another filter driver which takes all the packets arriving on an interface, wraps them up in a GRE packet and spits them out the same or a different interface. You can put this tunneling application either on the source machine itself or on a dedicated machine that has the source machine mirrored to it using a regular switch port mirror - this way you can avoid modifying the source machine at all if it is running critical production processes. The source for both filters is at
https://github.com/Raggles/gremirror - gretunnel tunnels the packets and grestrip strips the GRE headers at the other end. This solution works well with the winpcap driver because winpcap is a protocol driver, whereas WinpktFilter is an intermediate driver/LWF filter driver (depending on OS). I haven't yet tested with npcap, so I'm unsure whether npcap will see the packets before or after the GRE headers have been removed.
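The stripping itself is mechanical once you have the raw frame. Here's a Python sketch of the header arithmetic (the actual driver is C; treating the outer packet as IPv4 and the GRE payload as a complete inner Ethernet frame, as ERSPAN-style mirrors produce, are assumptions of mine):

```python
import struct

GRE_PROTO = 47  # IP protocol number for GRE

def strip_gre(frame: bytes) -> bytes:
    """Given an Ethernet frame carrying IPv4/GRE, return the inner
    (mirrored) Ethernet frame; pass anything else through unchanged."""
    eth_type = struct.unpack("!H", frame[12:14])[0]
    if eth_type != 0x0800:
        return frame                              # not IPv4
    ip_start = 14
    ihl = (frame[ip_start] & 0x0F) * 4            # IPv4 header length in bytes
    if frame[ip_start + 9] != GRE_PROTO:
        return frame                              # not GRE
    gre_start = ip_start + ihl
    flags = struct.unpack("!H", frame[gre_start:gre_start + 2])[0]
    gre_len = 4                                   # base header (RFC 2784)
    if flags & 0x8000: gre_len += 4               # checksum present
    if flags & 0x2000: gre_len += 4               # key present (RFC 2890)
    if flags & 0x1000: gre_len += 4               # sequence present (RFC 2890)
    return frame[gre_start + gre_len:]            # inner frame starts here
```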

| Posted in Networking | No Comments »

Update on compiling sall calcs with gcc for SCD6000

It's been a while since I last worked on this, and today I made my first (unsuccessful) attempt to load a gcc-compiled ELF onto an SCD6000.  In the previous post, I outlined how a sall file can be turned into an ELF outside of the RTU Station environment.  The progress made in this post is the small step of using the gcc toolchain from within the RTU Station environment.  This is required because any points used by the calc need to be parsed and included for the rest of the RTU configuration (these points are stored in the defs.h file, which gets renamed to <calcname>.h).  The details are on GitHub.

Unfortunately, the first attempt to load an actual file didn't work (the calc shows up in the error state in the RTU).  This is not at all unexpected, and now begins the tedious process of trying to understand why.  This may prove to be quite a challenge, given that we can't really observe the internals of the RTU, but I'll keep chipping away at it.



Reverse engineering the innards of System Platform #1 - Packages, Templates, Objects, Primitives and Attributes

In order to try and understand various aspects of System Platform a little better, I have been poking around the inner workings of the software and discovering many interesting things, which I intend to share in a series of posts on the subject.  So here goes with the first one about Packages, Templates, Primitives and Attributes - settle in because it's a long one.

Firstly, we must realise that the templates, primitives and attributes that we are going to talk about here are not the same as the templates and attributes that we are used to dealing with in the IDE, which I will call IDE templates and IDE attributes from now on.

Templates are a base object type, such as $UserDefined, $Symbol, $DiscreteDevice and so on.  This list of templates is stored in the database in the template_definition table.


Google Earth KML Generator for Radio Networks

One thing that really bothers me sometimes is having the same data in multiple places, and having to manually update the same data more than once.  So I was bothered when a colleague recently spent a few days creating a Google Earth file for our communications network (>150 nodes), even though we already had a spreadsheet of the locations of every node.  So, in a couple of hours I came up with this program, which converts a spreadsheet or CSV file into a KML file that can be imported into Google Earth.

Take the following data (this is not a real network, I just picked a bunch of random points):

Drop the spreadsheet onto the CsvToKml program and it spits out a KML file, which looks like this in Google Earth:

Now every time we update the spreadsheet, all we need to do is feed it to CsvToKml and we get a new Google Earth file.
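The conversion itself is tiny. A Python sketch of the same idea (the name/lat/lon column headings are placeholders of mine, not CsvToKml's actual schema; note KML wants coordinates as lon,lat):

```python
import csv
import io
from xml.sax.saxutils import escape

def csv_to_kml(csv_text: str) -> str:
    """Turn rows with name/lat/lon columns into a minimal KML document."""
    placemarks = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        placemarks.append(
            "  <Placemark><name>{}</name>"
            "<Point><coordinates>{},{},0</coordinates></Point></Placemark>"
            .format(escape(row["name"]), row["lon"], row["lat"]))
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
            + "\n".join(placemarks) + "\n</Document>\n</kml>")
```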

| Posted in Software | No Comments »

protplot - now with Fault Level annotations and Chance T fuses

I finally got round to adding fault level annotations to protplot - this was the last feature missing from protplot that we had in the old spreadsheets protplot is designed to replace. I have also completed the tedious task of transcribing the graphs for Chance T fuses, so protplot is pretty much feature complete as per my original plans. There are some more things I'd like to work on (highlighting non-grading portions of the graph, and interactive curve adjustments, for example), but I don't have a timeline for any of that at this stage.

| Posted in Software | No Comments »

Loopback interfaces on RX1500

Recently we decided to convert a bunch of layer 3 radio links into layer 2 links in order to simplify our OSPF routing database, work around a couple of nasty OSPF bugs in the radios and achieve better balancing when there are equal cost paths.  However this took me down a bit of a rabbit hole that I wasn't expecting - consider the topology change below:

On the left, all of the central router's interfaces are in one VLAN/subnet.  On the right, the interfaces all have their own IP addresses.  This creates a problem - which IP address does one use when wanting to SSH into the device?  We can use any of the IP addresses, of course, but what about monitoring via SNMP?  What about terminating VPN endpoints?  Which IP do we choose, when any of the radio links might be down for whatever reason and the associated IP address therefore unavailable?
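The direction the post title points in is a loopback interface: give the router a /32 that isn't tied to any physical link and advertise it into OSPF, so management traffic can reach the device over whichever paths remain up. A sketch in generic quagga-style syntax (the addresses are documentation examples, and the ROX II CLI syntax differs):

```
interface dummy0
 ip address 192.0.2.1/32
!
router ospf
 network 192.0.2.1/32 area 0
 passive-interface dummy0
```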

| Posted in Networking | No Comments »

Crosswire Webclient

Crosswire is a dispatch platform that is compatible with SIP, analog and digital radio.  It comes with a Java client which you pay for per instance, but there are occasions where you may want additional people to be able to listen in.  To this end I have developed a web client that watches the Crosswire MySQL database and plays out the audio in semi real time.  By this, I mean that each call has to be persisted to disk before you can listen to it, which creates a delay of at least the length of the call.  It is written in ASP.NET (C# Razor); the source is here.
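The watching part is essentially a poll loop over the call-recording table. A sketch of the idea in Python against an SQLite stand-in (the table and column names here are placeholders; Crosswire's real MySQL schema will differ):

```python
import sqlite3

def new_calls(conn, last_seen_id):
    """Return (id, audio_path) for calls persisted since the last poll.
    Only rows flagged complete are returned, which is why playback lags
    by at least the length of each call."""
    return conn.execute(
        "SELECT id, audio_path FROM calls "
        "WHERE id > ? AND complete = 1 ORDER BY id",
        (last_seen_id,)).fetchall()
```

The web client would run this on a timer, remember the highest id it has served, and stream each newly completed recording to the browser.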

| Posted in Software | No Comments »

Adapting ospf-visualiser for use with RuggedCom RX1500

A handy thing to have in an OSPF network is a visual view of the active paths and costs.  There are a couple of expensive tools that do this very well, but there isn't much around that can do it for free.  One such free tool is ospf-visualiser, which can take output from quagga and print out a pretty picture of the network.  If you have telnet enabled on a machine running quagga then you can telnet to it and everything is supposed to work, but it isn't compatible with RuggedCom devices out of the box.

Therefore I have extended ospf-visualiser so that it can SSH to a RuggedCom ROX II device, log in using the given credentials and extract the required data to build the model.  Rather than write a new parser for the RuggedCom ospf command output, I opted to log in to the maintenance shell and run the quagga commands directly so the existing data parser can be used, which means the total amount of code changed is actually quite small.
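That reuse works because the maintenance shell produces stock quagga output. As a flavour of what the existing parser deals with, here is a small Python sketch that pulls neighbour tuples out of quagga's `show ip ospf neighbor` table (the column layout is assumed from standard quagga output; this is not ospf-visualiser's actual Java parser):

```python
def parse_ospf_neighbors(output: str):
    """Extract (router id, state, address, interface) tuples from
    quagga's 'show ip ospf neighbor' output, skipping the header row."""
    neighbors = []
    for line in output.splitlines():
        fields = line.split()
        # Data rows start with a dotted-quad router ID
        if len(fields) >= 6 and fields[0].count(".") == 3:
            neighbors.append((fields[0], fields[2], fields[4], fields[5]))
    return neighbors
```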

New SSH options for source data

Output for example RuggedCom network

The next step will be to enable live listening to LSA packets so that the visualisation is truly live, but for now the source and binaries are on GitHub.