Roboteam example

Started by AADPL, January 16, 2024, 01:24:03 PM

Previous topic - Next topic

AADPL

Apologies if I've just been unsuccessful in finding an existing topic, but I was wondering if there was a tutorial or example file for making two robots work together in Roboteam using KUKA|prc?

AADPL

Apologies, I clearly didn't look hard enough and have now found this thread.

The second robot there is giving an error of "1. An error occurred while synching the robots' commands. Please check if RoboTeam is setup correctly and if both robots use the same number of synchronization commands."

Furthermore, how do I change the root points of the robots?



Quote from: Johannes @ Robots in Architecture on February 23, 2023, 05:48:06 PMSorry for the late reply. Unfortunately we do not support the RoboTeam function where the other robot acts like a synchronized, external kinematic system.
Also, this limitation is mentioned in the discussion. Could you please give an example of what it rules out? I.e. is Video 2 in that thread not possible?

Johannes @ Robots in Architecture

Hello,

Sometimes it needs a re-compute in Grasshopper; do that and then move the slider.
You can change the root point of the robot via the Custom Robot component. Plug it in as a reference and change the root point.
What is not supported is that when one robot moves, the second robot is constrained to it, i.e. it follows its movement.

Best,
Johannes

AADPL

Hi Johannes,

I've just got to the stage in Roboteam of trying to run my first program. I've got the robots to jog in a kinematically linked way and they correctly follow each other, and my calibrated tools are both on tool #1.

I have set up the Grasshopper file as attached below. The KR60 (master) robot, with its root at 0,0,0, is running the program no problem, but the KR30 (slave) robot with the custom robot root is not able to run it. When it begins to run the home position fold, the robot returns the error "Array index inadmissible", which I can only assume relates to the Base or Tool numbers.

Any ideas what could be the issue?

Best,

Alex

Johannes @ Robots in Architecture

Hello Alex,

For one robot, you have the "Hardcode Tool and Base" option (in Advance/Code) enabled, could you try disabling it?

Thanks,
Johannes

AADPL

Hi Johannes,

Nice, that fixed it. I must've clicked it when playing around and forgotten to unclick it!

Another quick question: why is the first command provided to the Core component converted to a PTP move, even though it is defined as a LIN move? Is there a way to undo this?

Best,

Alex

Johannes @ Robots in Architecture

Hello Alex,

It's still odd that that would cause an error message, but I'm happy it got resolved!

Regarding the PTP: it avoids a lot of problems, because you cannot set the robot's posture via a LIN movement; instead, the posture has to be actively set via the Initial Posture in the settings. You can avoid the conversion by manually setting a PTP position as the first command, but going from the start position to the first position in a straight line usually leads to problems with singularities etc.

Best,
Johannes

AADPL

Hi Johannes,

Quote from: Johannes @ Robots in Architecture on February 23, 2023, 05:48:06 PMSorry for the late reply. Unfortunately we do not support the RoboTeam function where the other robot acts like a synchronized, external kinematic system.
It would be possible to integrate it with a reasonable effort, but we have never needed it so far.

Just as a final point on this thread, I would like to register our interest in this becoming a feature at some point!

Do you have any recommendations for mocking this functionality up i.e. a dynamic/moving base? I was thinking I could just take the planes from the Analysis Output of one robot and then Orient the planes for the other robot accordingly?
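To sketch the orient step I have in mind in plain Python (hypothetical numbers throughout, and only the KUKA A angle, i.e. rotation about Z, is handled; a real version would need the full A/B/C rotations):

```python
import math

def base_to_world(point, base_xyz, base_a_deg):
    # Orient a point defined relative to robot #1's moving board into
    # world space: rotate by the base's A angle (about Z), then translate.
    a = math.radians(base_a_deg)
    c, s = math.cos(a), math.sin(a)
    x = c * point[0] - s * point[1] + base_xyz[0]
    y = s * point[0] + c * point[1] + base_xyz[1]
    z = point[2] + base_xyz[2]
    return (x, y, z)

# Hypothetical flange plane of robot #1, as it might be read from the
# Analysis output at one simulation step, plus a target on the board.
flange_xyz, flange_a = (800.0, 200.0, 1100.0), 90.0
target_on_board = (50.0, 0.0, 10.0)

world_target = base_to_world(target_on_board, flange_xyz, flange_a)
print(world_target)  # roughly (800.0, 250.0, 1110.0)
```

In Grasshopper itself this would of course just be an Orient from the analysis plane to World XY, but the maths is the same.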

How is the "Frame" component supposed to be used? The tooltip says "e.g. for setting the base system without going through the GUI", but I cannot figure out where that should be plugged in to function?

Best,

Alex

Johannes @ Robots in Architecture

Hello Alex,

You can right-click the Core component to set the base dynamically.
It's definitely possible to mock something like this up; however, you might have problems with the normalized simulation slider: when you change the base, the toolpath changes, and with it the timing.
You may want to provide a single movement out of the toolpath, or have KUKA|prc generate all movements for one robot and then generate the corresponding movements for the other robot. That might also work fine via the Analysis component.

Regarding that becoming a feature: the annoying part is the communication between two components in Grasshopper. Of course it is possible in the "background" (as it already happens), but the more such features there are, the less transparent the dataflows become. At least in my opinion.

But I'm very open to suggestions as to how that could be implemented!
Best,
Johannes

AADPL

Hi Johannes,

So I've had a go at mocking up the robots working together with MotionSync, and that's working fine, if a little wobbly, but I found it too hard to figure out how to use the output of one robot to dynamically recalculate the base of the second robot.

So here are some of my thoughts, but they might be a little scattered.

Quote from: Johannes @ Robots in Architecture on February 05, 2024, 02:14:27 PMYou can right-click the Core component to set the base dynamically.
It's definitely possible to mock something like this up, however you might have problems with the normalized simulation slider. Because when you change the base, the toolpath changes and through that the time changes.
One issue with what you describe here is that updating the base will surely regenerate the KRL code as well, so it would be hard to get a continuous program out of an approach like this.

Another issue is that a piece of information is surely missing there: if the only way to change the robot base is by updating the Core component, then the information of where the second robot is located with respect to the first, i.e. the robot root, is also lost.

The Analysis output block also isn't fantastic for this, because the planes output isn't directly linked to the number of input commands. I think this is primarily because it outputs planes for the movement from the home position to the first command. That makes the planes output useful for making a trail for the toolpath, but it's very hard to match the output of one Analysis block up to the input of another Core component.

It would be useful if the robot geometry output or the Analysis output included an axis planes output, so that for each Cartesian target you could see what the joint values and locations are, but mainly just so that a plane is output at the TCP on the geometry.

And now the important bit... maybe:

To me the easiest way to get this working is to just use the kinematically linked base on the controller so that essentially robot #2 is working in the coordinate system of the base/frame of robot #1. I'm working in the context of a really simple plywood drawing board attached directly to the flange with bolts through the drawing surface, and the resulting base definition taken from calibrating on the controller is:

So the resulting XYZABC KRL code output for robot #2 would all be based around the World XY in Rhino. In terms of running on the robot, the controller would just translate/orient that to the base/frame of robot #1; in terms of simulating in PRC, it would just be a matter of orienting the positions to the TCP plane that I discuss in the previous paragraph.

But in the meantime, I can't think of a way of getting robot #2 to output KRL code based around the origin... I think this is because the only way to define the robot root is by using a base, but arguably there should be a way to define the robot root AND define a frame/base.

Not sure how clear I've made myself here...

Johannes @ Robots in Architecture

Hello,

Regarding the planes in the Analysis component: there is an ID output for every position. If it's an integer (10) then it's a programmed movement, and if it's a float (10.43) it's interpolated. Maybe that could help a bit.
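As a quick sketch of that filtering (the ID values below are made up for illustration):

```python
# Hypothetical ID stream from the Analysis component: integer IDs mark
# programmed movements, float IDs mark interpolated in-between positions.
ids = [0, 0.37, 0.74, 1, 1.52, 2, 2.48, 2.91, 3]

programmed = [i for i in ids if float(i).is_integer()]
interpolated = [i for i in ids if not float(i).is_integer()]

print(programmed)    # positions that correspond 1:1 to input commands
print(interpolated)  # simulation-only in-between positions
```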

For the kinematically linked base, I guess that you could simulate that by transforming the positions according to the other robot's movements, though, of course, that would mess up the KRL code as the positions would be in the global coordinate system and not in the other robot's base. So I think that would require two steps, one for code generation and the following one for simulation, with the appropriate transformations.
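A rough sketch of that second step, expressing a world-space position in the other robot's base (made-up numbers, and only a rotation about Z for brevity; a full version would handle all of A/B/C):

```python
import math

def world_to_base(point, base_xyz, base_a_deg):
    # Express a world-space XYZ point in a base frame: subtract the base
    # origin, then apply the inverse rotation (i.e. rotate by -A about Z).
    a = math.radians(base_a_deg)
    c, s = math.cos(a), math.sin(a)
    dx = point[0] - base_xyz[0]
    dy = point[1] - base_xyz[1]
    dz = point[2] - base_xyz[2]
    return (c * dx + s * dy, -s * dx + c * dy, dz)

# Hypothetical linked base (robot #1's flange) and a simulated world target;
# the returned value is what would go into the KRL for robot #2.
target_in_base = world_to_base((800.0, 250.0, 1110.0),
                               (800.0, 200.0, 1100.0), 90.0)
print(target_in_base)  # roughly (50.0, 0.0, 10.0)
```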

We are currently in the process of a full rebuild of KUKA|prc, and this feedback is very important to me to decide how to lay out the software architecture. But within the context of the current version of KUKA|prc, unfortunately I don't think that it's feasible for me to integrate the linked base any time soon, especially as we don't have a RoboTeam setup ourselves...

Best,
Johannes

AADPL

Ah okay that all makes sense re: floats and integers and transforming the positions. I'll give that a go.

Also very exciting that PRC is being fully rebuilt! What sort of timeline is that on?

If you ever want to collaborate more closely on testing RoboTeam options and functionality in the future, we'd be more than happy to lend a hand. And if you're visiting London and want access to our RoboTeam cell, we'd be glad to facilitate!

Best,

Alex

Johannes @ Robots in Architecture

Hello Alex,

Ask me something easier! With RiA on one side and the university on the other, work on the "new" PRC usually happens when my schedule unexpectedly clears.

Honestly the challenge is that KUKA|prc is now very well tested, stable, and relatively fully featured, so doing something that does "more" is hard, considering that users don't see whether the code behind it is janky or well structured ;) But the next PRC will definitely have some nice, new things that are not possible with the current KUKA|prc releases!

But thanks for the offer regarding testing. RoboTeam is definitely on my todo list to explore further, once we have got a suitable project that requires it, I might take you up on the offer!

Best,
Johannes

AADPL

Hi Johannes,

I'm having a go at translating the commands into the coordinates of the base system.

The way I want to do this essentially involves switching bases after the first PTP move is complete, to then switch to the kinematically linked base.

The issue is that, I think, the way the base is currently initialised means that it does not switch from Base #2 to Base #3.



So here, I tried two methods. I tried copying the FDAT_ACT command on line 18 down to line 33 and changing "BASE_NO 2" to "BASE_NO 3". I also tried just inserting $BASE = BASE_DATA[3]. Neither of these changed the "Base selection" in the top right at all; if I use DISPLAY>VARIABLE>SINGLE to monitor the $BASE variable, it does change, just not in the "Cur. tool/base" window in the top right...

Any thoughts how to change the Base?

Best,

Alex

AADPL

Hmm, so I've now managed to get it to update in the "Cur. tool/base" window by using either $ACT_BASE = 3 or BAS (#BASE, 3), but this hasn't fixed the issue. Looking at the "Display" -> "Actual position" window, the coordinates don't actually change despite the base changing. By contrast, if I use the "Cur. tool/base" window to change the base manually, the coordinates do switch from world coordinates to coordinates in the base system.

Any thoughts?