
Asynchronous PDI updates #256

Open · Rahix opened this issue Dec 12, 2024 · 1 comment
Labels
feature New feature or request

Comments


Rahix commented Dec 12, 2024

Moving this to a separate issue from #255 to keep things organized. Context:

In the ethercrab examples, the application cycle is always synchronized with the bus cycle, from what I understand. Calling tx_rx() on the group pauses the application cycle until the bus update has occurred. For a lot of use cases this makes sense, of course.

However, I was trying to understand whether ethercrab also supports keeping the application and bus cycles asynchronous, so the application continuously updates output values in the PDI and the bus picks up whatever values are available at the time it gets around to updating the I/O again.
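
For reference, the synchronized pattern I mean looks roughly like this (just a sketch, not compiled; it assumes maindevice, group, and a tick_interval are set up as in the examples):

// Sketch of the synchronized pattern: application logic runs strictly
// between bus updates. Assumes `maindevice`, `group` and `tick_interval`
// are already set up as in the ethercrab examples.
loop {
    // The application cycle pauses here until the bus update is done.
    group.tx_rx(&maindevice).await.expect("TX/RX");

    let mut subdevice = group.subdevice(&maindevice, 0).unwrap();
    let (_i, mut o) = subdevice.io_raw_mut();

    // Application logic, in lockstep with the bus cycle.
    for byte in o.iter_mut() {
        *byte = byte.wrapping_add(1);
    }

    tick_interval.tick().await;
}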

I wanted to follow up on your comment, @jamwaffles:

FWIW my intent for the API was that if the user wants to read/write the PDI in another thread/task, they need to bring their own synchronisation primitives, so everything provided by EtherCrab should be Send + !Sync (if I've got those bounds right, i.e. usable in one thread only).

What would you envision the user synchronization to look like here? My approach would then be to keep a separate application PDI and copy it over in sync with the bus updates:

use std::sync::Arc;

// Application-side copies of the input and output PDI sections
// (16 bytes each here, just as an example size).
let app_pdi = Arc::new(tokio::sync::Mutex::new(([0u8; 16], [0u8; 16])));

tokio::spawn({
    let app_pdi = app_pdi.clone();
    async move {
        loop {
            {
                // tokio's Mutex is locked asynchronously, and the guard
                // must be mutable so we can write the input copy.
                let mut app_pdi = app_pdi.lock().await;
                let mut subdevice = group.subdevice(&maindevice, 0).unwrap();
                let (i, mut o) = subdevice.io_raw_mut();
                app_pdi.0.copy_from_slice(i);
                o.copy_from_slice(&app_pdi.1);
            }
            group.tx_rx(&maindevice).await.expect("TX/RX");
        }
    }
});

loop {
    {
        let mut app_pdi = app_pdi.lock().await;
        // Split-borrow the tuple through the guard so inputs and
        // outputs can be used independently.
        let (_i, o) = &mut *app_pdi;
        for byte in o.iter_mut() {
            *byte = byte.wrapping_add(1);
        }
    }

    // `tick_interval` is assumed to be a tokio::time::interval created
    // earlier, pacing the application cycle independently of the bus.
    tick_interval.tick().await;
}

But maybe there are also other ways?

(Not a real feature request at the moment, just trying to play with the concepts and API for now :D)

Rahix added the feature (New feature or request) label Dec 12, 2024
jamwaffles (Collaborator) commented

This is pseudocode, so apologies if it doesn't compile, but something like this using nice structs and EtherCrab's derives (you'll need to add more attributes):

// The pack_to_slice/unpack_from_slice methods used below come from the
// traits in the ethercrab-wire crate.
use ethercrab_wire::{EtherCrabWireRead, EtherCrabWireWrite};

// Default lets us construct an initial state; Clone is handy if you
// want to snapshot the outputs elsewhere.
#[derive(Default, Clone, ethercrab_wire_derive::EtherCrabWireWrite)]
#[wire(bytes = 1)]
struct Outputs {
    #[wire(bits = 1)]
    a: bool,
    #[wire(bits = 1)]
    b: bool,
    #[wire(bits = 1)]
    c: bool,
    #[wire(bits = 1)]
    d: bool,
}

#[derive(Default, ethercrab_wire_derive::EtherCrabWireRead)]
#[wire(bytes = 1)]
struct Inputs {
    // You get the idea :D
    #[wire(bits = 1)]
    a: bool,
}

// Outputs in `.0` (written to the bus), Inputs in `.1` (read from it).
let app_state = Arc::new(tokio::sync::Mutex::new((
    Outputs::default(),
    Inputs::default(),
)));

tokio::spawn({
    let app_state = app_state.clone();
    async move {
        loop {
            {
                let mut app_state = app_state.lock().await;
                let mut subdevice = group.subdevice(&maindevice, 0).unwrap();
                let (i, mut o) = subdevice.io_raw_mut();
                app_state.0.pack_to_slice(&mut o).unwrap();
                app_state.1 = Inputs::unpack_from_slice(&i).unwrap();
            }
            group.tx_rx(&maindevice).await.expect("TX/RX");
        }
    }
});

loop {
    {
        let mut app_state = app_state.lock().await;

        // Split-borrow through the guard: Outputs first, Inputs second.
        let (o, i) = &mut *app_state;

        // Do something with the inputs
        dbg!(i.a);

        // Whatever your application logic needs
        o.a = true;
        o.b = false;
        o.c = true;
        o.d = false;
    }

    tick_interval.tick().await;
}

Here, the raw PDI is confined to the TX/RX task. You don't have to do it like this, but yes, if you want the raw PDI in your app thread you'll have to copy it. Another option, instead of a Mutex, would be a queue, or maybe bbqueue.
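
To make that direction concrete, here's a rough sketch (mine, untested) using tokio::sync::watch instead of a Mutex, so the TX/RX task just snapshots whatever the application published last. It reuses the Outputs struct from above (which needs Clone and Default); inputs could flow back the other way through a second channel.

// Sketch only: a watch channel instead of a Mutex. The TX/RX task picks
// up the latest outputs the application has published, so the two
// cycles never block each other.
use tokio::sync::watch;

let (out_tx, out_rx) = watch::channel(Outputs::default());

tokio::spawn(async move {
    loop {
        {
            // Snapshot the latest outputs; the borrow is released before
            // the bus update so no lock is held across the .await.
            let outputs = (*out_rx.borrow()).clone();
            let mut subdevice = group.subdevice(&maindevice, 0).unwrap();
            let (_i, mut o) = subdevice.io_raw_mut();
            outputs.pack_to_slice(&mut o).unwrap();
        }
        group.tx_rx(&maindevice).await.expect("TX/RX");
    }
});

// Application side: publish a new value whenever the logic produces one.
out_tx.send(Outputs { a: true, b: false, c: true, d: false }).unwrap();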

Also, for multiple SubDevices, the lock granularity is the group PDI, as it's transferred and updated all in one go. So if you, say, had a struct holding the IO state for 3 SubDevices, it makes sense to wrap that whole thing in a mutex or other synchronisation primitive.
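
For example, roughly (device names and IO types made up for illustration):

// Illustrative only: one coarse-grained state struct for 3 SubDevices,
// behind a single Mutex to match the all-at-once group PDI transfer.
#[derive(Default)]
struct AppState {
    drive_a: (DriveInputs, DriveOutputs),
    drive_b: (DriveInputs, DriveOutputs),
    io_terminal: (TerminalInputs, TerminalOutputs),
}

let app_state = Arc::new(tokio::sync::Mutex::new(AppState::default()));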
