Moving this to a separate issue from #255 to keep things organized. Context:
In the ethercrab examples, the application cycle is always synchronized with the bus cycle, from what I understand: calling tx_rx() on the group pauses the application cycle until the bus update has occurred. For a lot of use cases this makes sense, of course.
However, I was trying to understand whether ethercrab also supports keeping the application and bus cycles asynchronous, so that the application continuously updates output values in the PDI and the bus picks up whatever values are available at the time it gets around to updating the I/O again.
FWIW my intent for the API was that if the user wants to read/write the PDI in another thread/task, they need to bring their own synchronisation primitives, so everything provided by EtherCrab should be Send + !Sync (if I've got those bounds right, i.e. usable in one thread only).
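To illustrate those bounds with plain std types (this is not EtherCrab's actual API, just a sketch of what `Send + !Sync` means in practice): such a type can be moved into a single thread, but a shared reference to it cannot cross threads.

```rust
use std::cell::Cell;
use std::thread;

// `Cell<u8>` makes this type `Send` but `!Sync`: the whole value can be
// moved to another thread, but `&Counter` cannot be shared across threads.
struct Counter {
    value: Cell<u8>,
}

fn main() {
    let counter = Counter { value: Cell::new(0) };

    // Moving the value into exactly one other thread is fine (`Send`)...
    let handle = thread::spawn(move || {
        counter.value.set(counter.value.get() + 1);
        counter.value.get()
    });

    assert_eq!(handle.join().unwrap(), 1);

    // ...but sharing `&counter` between threads would fail to compile,
    // because `Counter` is `!Sync`. To share it, you would wrap it in a
    // synchronisation primitive such as a `Mutex` yourself.
}
```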
What would you envision the user synchronization to look like here? My approach would be to keep a separate application PDI and copy it over synchronously with bus updates:
This is pseudocode, so apologies if it doesn't compile, but it would look something like this, using plain structs and EtherCrab's derives (you'll need to add more attributes):
```rust
#[derive(ethercrab_wire_derive::EtherCrabWireWrite, Default)]
struct Outputs {
    #[wire(bits = 1)]
    a: bool,
    #[wire(bits = 1)]
    b: bool,
    #[wire(bits = 1)]
    c: bool,
    #[wire(bits = 1)]
    d: bool,
}

#[derive(ethercrab_wire_derive::EtherCrabWireRead, Default)]
struct Inputs {
    // You get the idea :D
}

// Outputs are field `.0`, inputs are field `.1`.
let app_state = Arc::new(tokio::sync::Mutex::new((Outputs::default(), Inputs::default())));

tokio::spawn({
    let app_state = app_state.clone();

    async move {
        loop {
            {
                let mut app_state = app_state.lock().await;
                let mut subdevice = group.subdevice(&maindevice, 0).unwrap();
                let (i, o) = subdevice.io_raw_mut();

                // Copy the application outputs into the raw PDI...
                app_state.0.pack_to_slice(o).unwrap();
                // ...and the raw PDI inputs back into the application state.
                app_state.1 = Inputs::unpack_from_slice(i).unwrap();
            }

            group.tx_rx(&maindevice).await.expect("TX/RX");
        }
    }
});

loop {
    {
        let mut app_state = app_state.lock().await;
        let state = &mut *app_state;
        let (o, i) = (&mut state.0, &state.1);

        // Do something with the inputs
        dbg!(i.a);

        // Whatever your application logic needs
        o.a = true;
        o.b = false;
        o.c = true;
        o.d = false;
    }

    tick_interval.tick().await;
}
```
Here, the raw PDI is confined to the TX/RX task. You don't have to do it like this, but yes if you want to have the raw PDI in your app thread you'll have to copy it. Another option to using a Mutex would be a queue or maybe bbqueue.
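As a sketch of the queue idea using only std types (no EtherCrab calls here, and `std::sync::mpsc` standing in for bbqueue), the application side publishes output snapshots, and the bus side drains the queue just before each TX/RX, keeping the latest snapshot or reusing the previous one if nothing new arrived:

```rust
use std::sync::mpsc;

#[derive(Clone, Copy, Debug, Default, PartialEq)]
struct Outputs {
    a: bool,
    b: bool,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Outputs>();

    // Application side: publish a fresh snapshot whenever the logic runs.
    tx.send(Outputs { a: true, b: false }).unwrap();
    tx.send(Outputs { a: true, b: true }).unwrap();

    // Bus side: just before tx_rx(), drain the queue and keep only the
    // most recent snapshot; if nothing arrived, the previous one is reused.
    let mut current = Outputs::default();
    while let Ok(next) = rx.try_recv() {
        current = next;
    }

    // `current` would now be packed into the raw PDI.
    assert_eq!(current, Outputs { a: true, b: true });
}
```

Unlike the mutex version, neither side ever blocks the other, at the cost of the bus possibly skipping intermediate snapshots.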
Also, for multiple SubDevices, the lock granularity is the group PDI, as it's transferred and updated all in one go. So if you, say, had a struct holding the IO state for 3 SubDevices, it makes sense to wrap that whole thing in a mutex or other synchronisation primitive.
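A minimal sketch of that granularity with std types only (the `SubDeviceIo` struct is invented for illustration): one mutex guards the IO state for all three SubDevices, so a bus update swaps the whole set atomically and the application always sees a consistent snapshot.

```rust
use std::sync::{Arc, Mutex};

// Hypothetical per-SubDevice IO state, for illustration only.
#[derive(Clone, Copy, Debug, Default)]
struct SubDeviceIo {
    inputs: u8,
    outputs: u8,
}

fn main() {
    // One lock around the IO state for all 3 SubDevices, mirroring the
    // fact that the group PDI is transferred and updated in one go.
    let group_io = Arc::new(Mutex::new([SubDeviceIo::default(); 3]));

    // Bus side: update every SubDevice's inputs under a single lock.
    {
        let mut io = group_io.lock().unwrap();
        for (idx, sd) in io.iter_mut().enumerate() {
            sd.inputs = idx as u8 + 1;
        }
    }

    // Application side: read a consistent snapshot of all SubDevices at
    // once, rather than locking each SubDevice individually.
    let snapshot = *group_io.lock().unwrap();
    assert_eq!(snapshot[2].inputs, 3);
}
```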
I wanted to follow up on your comment, @jamwaffles:
> What would you envision the user synchronization to look like here? My approach would be to keep a separate application PDI and copy it over synchronously with bus updates:
But maybe there are also other ways?
(Not a real feature request at the moment, just trying to play with the concepts and API for now :D)