rrinker: Only reason I would want to make the board even smaller is to save on board cost
my understanding is that the least-cost board is typically some standard size. Cost can be reduced by fitting more than one layout on the standard size board: 1/2, 1/3, 1/4 ...
rrinker: I'm not adding any extra components by using SPI vs I2C. By using SPI and two different CS lines, it simplifies the code - I can have one routine that writes the output bits to the output port expanders, and another that reads the input bits, without worrying about keeping array elements straight or anything.
Most designs minimize hardware ($$) at the cost of software.
If you need separate chips for input and output, and need just one input or just one output, then you need at least two chips.
the cost in complexity is a configuration command that can specify which bytes (or bits) are input/output. Another approach is to designate/configure one port on each chip as input and the other as output.
each update (write) command can include only as many output bytes as are needed, and the response reads/returns only as many input bytes as are configured (or possibly all bytes, providing confirmation of output data).
How simple does the code need to be? It's not complicated for firmware to keep track of such things.
it may be useful to reconfigure boards that have been in service for years when the layout is changed and make it easier to squeeze in some additional I/O w/o adding additional hardware.
rrinker: The other thing is that all my nodes will be identical, at least in terms of firmware.
yes, the code for each node should be generic and identical (see above).
rrinker: The other option I suppose is to make the whole thing up out of multiple smaller boards
isn't this what you're doing by having a separate servo board?
greg - Philadelphia & Reading / Reading
With JLCPCB it's strictly by size, until you get down to their one standard size that has the promo pricing - which is 100x100mm. My rev 2 of the servo boards is a good bit smaller than the initial version; the initial ones were over $5 per board in a quantity of 5, the new one is under $4. Cheaper if I order more, but I'm not going to order 100 up front - I want a small order so I can build one and validate it. Electrically the first one was fine, matched the schematic, but the arrangement of the input protection resistors was a bit messy and inconsistent, and both the symbol AND the footprint for my relays were backwards - the NC contacts as drawn were actually the NO contacts when I had a physical relay in my hands. That would cause the most common state of one of the relays to always have the coil energized - or I could just put the relay on the back side of the board and solder it on the component side.
Part of why I am using a standard size input and output array is that in the locations where I don't think I need the full I/O complement today, I can leave off a couple of the port expanders; if later I do need the I/O, I can just add the missing chips, power the board back up, and I have the extra I/O pins. Or just swap boards around - I AM using DIP switches to set the address, rather than hard coding it, so it's easy enough to swap boards if needed - especially if I use some sort of solderless connections for all the I/O pins.
The multiple boards thing - I was thinking more of how the cpNode is designed: it has just the on-board I/O from the Arduino Leonardo they use, then they have add-on boards, with either one or two MCP23017s, that daisy chain on via I2C to add additional I/O if needed. The main reason I'm keeping the servo boards separate is, while there are digital PWM chips that can drive up to 16 servos, I was thinking of how many turnouts may be in a location, as opposed to running long servo connection cables all over the place from a centralized board that drives a lot. 2 is a good number, as a crossover would have 2 servos. In a busier area, like a yard, it could be advantageous to have a board that drives more servos, since there will be a lot of turnouts clustered together, plus they wouldn't need remote control from the dispatcher. I may do a custom yard board, which would have route control - just select a track and it gets lined.
Oh - early on, with JLCPCB, you could panelize multiple designs; say you had 4 designs that combined would fit within the 100x100mm limit - you get 10 for $4, so you'd end up with 10 each of 4 different boards. They now catch that, and you are restricted to a single design for it to count. So for a small board - I'm thinking something like a current transformer block detector - you could probably get 8 of them on each 100x100mm board, and just snap them apart once you get the boards. That would give you 80 boards for $4! It's too cheap to NOT do a PCB now. Shipping is where they get you, of course; it's around $15 for the 10 boards for express shipping. The slow boat method is only a couple of dollars less, not worth it.
--Randy
Modeling the Reading Railroad in the 1950's
Visit my web site at www.readingeastpenn.com for construction updates, DCC Info, and more.
rrinker: Part of why I am using a standard size input and output array is that in the locations where today I don't think I need the full I/O complement, and leave off a couple of the port expanders
are you building a board with multiple expanders, instead of a smaller board w/ just one expander and using only as many boards as you need?
with separate boards, the I/O boards could be located closer to the devices they need to connect with a daisy chained I2C connection back to the local processor node.
rrinker: ...the board back up, and I have the extra I/O pins. Or just swap boards around - I AM using DIP switches to set the address, rather than hard coding it,
you might consider using solder jumper pads to save space and cost, if not programming the address into EEPROM.
The boards with the IO expanders are the nodes - to do all of it with a whole bunch of simple boards with just 16 or 32 bits on each one would end up requiring too many nodes, even using 1/4 load drivers. And with all the other noisy electronics under a DCC layout, I'm not sure I2C is robust enough to run any sort of distance reliably, but I haven't really looked into it in any depth.
Actually I will probably just use jumper headers, not a DIP switch, although you can get insanely cheap DIP switches.
Nothing on my nodes is final yet; I haven't even drawn a schematic yet. It's all mostly just in my head, and some notes I wrote on my design ideas at this stage. So anything is subject to change. Only the servo controller is pretty much finalized - I already did the rev 1 board, fixed the mistakes, and I've been staring at the rev 2 board for a few weeks thinking if I stare long enough I will find something else to change, but I should probably just get the first 5 made so I can move forward with this.
why not build a node composed of an arduino and one or more (up to 8) expander boards connected to the arduino using a wired i2c/spi bus.
this way each node is somewhat flexible, can have 16 I/O pins w/ one board, 32 with a 2nd board, ... Each node could have as many as 128 I/O pins using 8 chips on an I2C bus.
i'm thinking of mounting an arduino nano and 3 of my I/O boards on a 2+" wide strip of masonite that can be tested as a unit before being mounted in the panel at the layout i'm helping to build.
That's the neat thing about using SPI - the Arduino library supports multiple pins for CS lines - so for a given CS pin, there can be 8 expanders. Insane amounts of IO, for those that insist on doing the old centralized method, or have something HUGE to control. Not sure what the latency might be with an Arduino type micro running it all. It would be good for the full-size CTC panel we have in the museum - someone was supposed to be working on automating it with a PLC, but I haven't seen any progress. As if I don't have enough to do, I suppose I could volunteer to take over the task.
A good middle ground might be to make the node board with the 328 and the RS485 interface, and a pair of port expanders, because every node will require one set of inputs and one set of outputs. Additional ones could be added as external boards. That would probably be the most economical use of board space. Might even get more than one expansion board in the 100x100 space, making them cheap to produce.
rrinker: Not sure what the latency might be with an Arduino type micro running it all.
i've noticed a delay between issuing a command to toggle a bit and seeing an LED change. I'll guess that the I2C library bit-bangs, and not too quickly. This might be a concern if there are many I2C operations to perform. Would be prudent to capture the command, send a response, and then perform the I2C operations. Have no idea if the same is true for the SPI bus code.
I2C is 100kHz, 400kHz, or 1.7MHz, per the MCP23017 data sheet. SPI runs up to 10MHz. All but the two slowest speeds put the data rate limit on the RS485 bus, not the downstream bus to the expanders. But I was more talking about an extreme case - again, since the SPI library allows using any digital pin for the CS line, on a 328 that leaves 8 pins (keeping TXD and RXD for the serial port, and the 3 lines needed for SCK, SI, and SO). 8 chains of 8 expanders, 16 bits per expander - a theoretical 1024 IO lines from a single microcontroller. Which is I think way more than the old Sunset Valley used with the original CMRI. And not really practical; that's a LOT of wire running all over the layout to centralize it all. And the time it would take to read/write all those expanders, with a 16MHz clock rate - I think there would be a HUGE latency.
I do have all the parts I need - RS422/485 line drivers, some MCP23S17s, and a USB to RS485 adapter. I wanted to finish the servo controller first, but I've cleared off the breadboard I had my test circuit on so I can give the node a try. I did already try the very basic test circuit and sketch from the original CMRI Arduino library - that one follows the full protocol and so I was able to test with JMRI, making a couple of LEDs turn on and off with JMRI.
rrinker: All but the two slowest speeds put the data rate limit on the RS485 bus
sounds like you're worried that the number of bytes transmitted over I2C/SPI will interfere with (delay) rs-485 messaging.
as i said earlier, because I2C/SPI may run slower than RS-485 (9600), you capture the command from RS-485 and immediately send the response. Then process the received command by writing to I2C/SPI and read state for the next response. There should be sufficient time between rs-485 messaging to perform the I/O
if somehow there was a geographic space on the layout requiring so much I/O that it delayed handling rs-485 messages or the command involved too many bytes of data, you can always use more than one node to service the I/O in that space.
if a node was composed of an arduino and one or more expander boards daisy chained together and adding another board affected performance, you could simply break the chain in half, add another arduino as a node, and move half the chain to the new node. yes, the node and location of I/O in the master application needs to change, but no code changes except to handle the I/O being added.
i don't think i'm suggesting anything that Bruce Chubb didn't envision back in 1985 with the technology available then.
I'm not worried about it; I think a 16MHz 328 will be more than fast enough. I was just postulating the insanely impractical fully maxed-out system. Anything in the 'normal' range of ports, I don't think will be even a slight issue. The whole point of a multidrop bus is that you can distribute this stuff - making a node with 64 bus extenders is the complete antithesis of a distributed architecture.
The same type of system is used in DMX for theater lighting, and it has no problem generating various color changes, movements, and flashes in time to any style of music. Although I don't think there is even a checksum on DMX, or if so, the very simplest sort. And the controller always sends the full 512 bytes; each device just counts the framing bits until it gets to its address and then uses the next byte as its data (or bytes, in the case of a device that uses multiple channels - a 3 axis, 3 color light would use 6 channels, though there are some devices that use more than 8 bits per color). Neat stuff - I was going to create a protocol to drive LED drivers for my light strips, as initially I figured a 'professional' thing like DMX would be too expensive/complicated to mess around with, but it turns out the protocol (over RS422/485) is insanely simple, and nodes that can drive all the LED strands I want (2x white, 1x RGB, and 1x blue) are not expensive at all - or I could make them, PWM from the micro driving MOSFETs to switch the high current. The commercial one I got to mess with cost $9 or so and it has 12 channels, so two sets of the listed LEDs, a full reel (3 meters) on each channel. Basically lighting for 20 linear feet of my layout (plus a power supply)
rrinker: I was just postulating the insanely impractical fully maxed-out system.
?
The impractical example of having as many MCP23S17's on one microcontroller as you possibly can. With so many to read/write, I'd expect some latency. But I'm not worried about latency in the real world, because such an implementation is completely impractical, simply on the basis of centralizing far too much IO in a single spot - THOUSANDS of wires running from all over the layout back to this monster central node. Impractical. Not worth going beyond the thought experiment stage. And any potential issues are non-issues, because it's not really practical to build such a monster.
There's a reason CMRI evolved from the parallel interface, to one massive motherboard with all the IO, to a serial distributed set of motherboards, to the current node concept. It's much easier to work with a distributed system, putting the IO near where it is needed. I was merely speculating that IF you did connect up the maximum possible number of port expanders, you'd get more than enough IO, but the poor 16MHz 8 bit micro would take significant time updating so many ports.
Just another example of how bigger isn't always better, it comes with its own set of issues and considerations.
rrinker: But I'm not worried about latency in the real world because such an implementation is completely impractical,
rather than discuss the impractical, why not discuss real limits?
for I2C, each byte requires 8 bits + an ack bit returned from the receiver. Each message also has a start and a stop bit. An 8 byte message therefore requires 74 bits, and at 100 kHz, 740 usec.
this value would double to read just as many bytes and seems more than fast enough if each node were polled every 10 ms, which would allow 100 nodes to be serviced every second.
but i think the arduino I2C library, or at least the way I'm using it, runs much much slower than 100 kHz, so i'm curious.
rrinker: There's a reason CMRI evolved from the parallel interface, to one massive motherboard with all the IO, to a serial distributed set of motherboards, to the current node concept.
evolved? from parallel -> one board -> nodes?
If I was actually serious about trying to build something like that, I'd bother to check on what the transfer speed is, and the data overhead, and all that. I was merely pointing out the biggest possible config (and that's not entirely true - there are other ways of increasing the port count that can cascade to more levels, but really, why would you?)
Yes - CMRI hardware has evolved over time. The original MR articles were for a parallel interface system which used a custom adapter card to fit specific brands of computers, because the world still wasn't PC-and-clone standardized back then (Bruce Chubb used a Heath H8, an 8080 machine). That went to the IOMB, fitted with multiple cards using 8255 PIO chips to get all the ports you needed - there was no real way to distribute any of that. At the tail end of the original series, a serial card was introduced to provide the computer-to-motherboard connection, using a 68701 for the interface. It supported RS-232 as well as multi-drop 422, so you could distribute motherboards around the layout, with just enough I/O cards fitted to handle a given section. A bunch of years later came the SMINI, which combined the serial interface and ports in one card, which you could distribute around the layout.
In between there, things changed and the SUSIC changed to a PIC processor, and the IO node cards for the original motherboard changed from 24 ports using an 8255 to a 32 port card using simple buffer/driver chips.
That's what I meant about CMRI evolving over the years.
rrinker: That's what I meant about CMRI evolving over the years
thanks i didn't know that
There's still compatibility with the old stuff, and the protocol hasn't really changed. But the only way you'd have the old 8255 IO cards would be to have had them all along; you can't buy those any more. The books don't even have information on them - it's a downloadable file from the JLC website to get the chapter containing the information on the old DOUT24 and DIN24 cards. Both the SMINI style and the IOMB style using the new 32 port cards and the SUSIC are available, so you can still do centralized or distributed.
The cpNode that Seth Neumann and others are building - they want to take that places: make it more like a CTC control point with local intelligence that talks to the dispatcher console. More like how the real thing works - the DS, via the CTC panel, sets a route for, say, an eastbound train to cross over from track 1 to track 2. The CTC console doesn't say anything about what aspects to display on the affected signals - that logic is local to the interlocking. This sort of thing would simplify the code on the computer, since it would just have to read the buttons and levers and transmit the requests to the proper nodes, or maybe even eliminate the computer if a physical CTC panel is used; it doesn't really take that much processing power to read a bunch of inputs and generate some CMRI packets.
In the end, I see this as more complex - each node needs custom programming to handle the specific track arrangement, and each node has to do a lot more, whereas with the traditional system, all nodes can be the same, and all processing and logic occurs in the main computer. However, I do find it interesting, replicating all the relays of the prototype with some Arduinos or similar. And a more robust comm protocol than a pulsing relay over a telegraph line.
rrinker: There's still compatibility with the old stuff, and the protocol hasn't really changed.
doesn't a messaging interface and nodes that simply input and output bits make the hardware irrelevant, as long as the node supports the messaging?
rrinker: This sort of thing would simplify the code on the computer, since it would just have to read the buttons and levers and transmit the requests to the proper nodes,
isn't it more cost effective to simplify the hardware by embedding the logic in well written software?
not sure what you're advocating
I've always felt the centralized logic was easier to set up, all in one place; the remote components providing the actual IO are then simplistic devices that just need to read/write the IO ports to and from the comm bus. All identical, other than perhaps some addressing if required by the comm bus.
Different system, same idea - the Digitrax SE8C is a 'dumb' device that can drive 32 signal heads, 8 stall motors, and has 8 inputs as well. It has no internal logic; it simply activates outputs or sends messages based on input status. All it needs is an address to function; any logic to set a given signal aspect based on one of the inputs requires a computer to do the logic, usually JMRI or something similar, but you can DIY. There are other 'dumb' Loconet devices that work the same way. Programming the logic, though, is the same, regardless of who makes the board.
Then there are 'smart' boards like the SIG20 from Logic Rail. It talks to Loconet, but it ALSO has local logic you can configure, so it can run with no computer involved. That programming is a whole raft of CVs to set conditions and trigger output pins. Not nearly as user friendly as JMRI's Logix. If all you want are animated signals, not complex interlocking, maybe that's a 'better' way to go - no computer hooked up to the layout. Although LOTS of things with DCC are made easier with a computer connected, not just running a signal system.
I also think the computer is more flexible, if it comes to changing things. Instead of reprogramming individual boards, you can insert a new junction and edit the existing logic to handle the new control point between two existing ones without touching the two existing control point boards on either side.
BTW I was looking up some of the info on SPI and Arduino. It seems the 16MHz 328 will run SPI at 8Mbps if it is the master. You can use SPI to link two Arduinos; in that case, the limit is the slave unit, which runs the SPI bus at 4Mbps - system clock /4. So it will read/write the bus extenders at 8Mbps, or 1MBps, and it's only a few bytes to command one of the extenders to send both bytes, or to send both bytes to a particular address. So I don't think there will be a problem for a node that has, say, 32 bits of output and 16 bits of input.
I2C on Arduino seems to always run at the slowest speed. I didn't dig in for more technical details on why this should be; even the 8 bit 16MHz 328 has enough processing power to run faster than that.
It CAN go faster than that for SPI, and possibly I2C, if interrupts are used. The stock numbers are based on no interrupts. And the 32 bit ARM based Arduinos only go faster by the clock speed factor, nothing else, so a 48MHz ARM-based Arduino is still 24Mbps SPI, despite having 3x the clock speed AND 4x the bit width. At least that's what I've read so far. Since the ARM micros are only available as SMD, it's not likely I will ever use one anyway.