The workflow you set up in the previous lesson identified all the fields in the form and generated plausible answers for all of them. But there are a couple of fields where the answers could be better. In this lesson, you'll give feedback to the agent on the returned answers and ask it to re-answer when necessary. Let's get to it.

LLMs are amazing, but they are best used to augment rather than replace a human. Your current form filler does an excellent job of figuring out which fields need to be filled in, and it gets most of them right. But there are a couple where it needs a little help. To take care of those, you're going to create a human-in-the-loop workflow, where you can optionally provide feedback to the agent you've created and have it incorporated into the results. So let's get started on that.

First, we'll need our imports as usual, we'll need asyncio, and we'll need our API keys. The changes we're going to make to our workflow are pretty significant. We're going to use the input-required and human-response events. These are new, special events specifically designed to let you exit the workflow and get feedback back into it. You used to have a single step that parsed your form and fired off all the questions. Since we might now loop back and ask the questions several times, you don't need to parse the form every time, so we'll split those steps up. This kind of refactoring is very common as you create a more complex workflow. Your new generate-questions step will be triggered either by a generate-questions event, emitted by the form parser, or by a feedback event, which is the loop we'll take after getting feedback. Then you'll emit an input-required event and wait for a human-response event; this pauses the whole workflow while it waits for outside input. Finally, you'll use the LLM to parse the feedback and decide whether it means you should continue and output the results, or whether you need to loop back. That's a lot, so let's get started.

First, we bring in our new event types. And now, as usual, we set up our events. There's a new one here: the feedback event. Let's bring in our workflow and walk through it line by line. This stuff is all the same: your setup remains the same, but your parse-form step has changed. We've now separated the form parsing from the question generation. So after we've parsed the form into fields, we set our fields to fill in the context, and then we emit a generate-questions event. The generate-questions step, like I said, can be triggered by a generate-questions event, but it can also be triggered by a feedback event; we'll see how that happens in a bit. The first thing we do is get the list of fields to fill from the context. Then, for each of those fields, we fire off a query event just like we did before, and we store the total number of fields so that our collect_events call knows how many fields it should be waiting for. Our ask-questions step is unchanged, but now, in our fill-in-application step, we're going to emit an input-required event. Just like before, we wait on collect_events, which returns None until all of the events have been collected. Once it's done, we turn them into a list of questions. The new thing we're doing here is setting our list of questions into the context to use later. And now we emit an input-required event. This sends a message to the human asking how things look and asking them to give some response.
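Here's a rough sketch of what that refactored workflow can look like, assuming a LlamaIndex-style workflow like the one from the previous lessons. The event names (GenerateQuestionsEvent, QueryEvent, ResponseEvent, FeedbackEvent), the class name FormFillingWorkflow, the prompt wording, and the helpers extract_form_fields and answer_from_resume are all illustrative stand-ins, not the exact code from the lesson notebook:

```python
from llama_index.core.workflow import (
    Workflow, Context, Event, StartEvent, StopEvent,
    step, InputRequiredEvent, HumanResponseEvent,
)

# Illustrative event types; the names are assumptions, not the lesson's exact ones.
class GenerateQuestionsEvent(Event):
    pass

class QueryEvent(Event):
    field: str
    query: str

class ResponseEvent(Event):
    field: str
    response: str

class FeedbackEvent(Event):
    feedback: str

class FormFillingWorkflow(Workflow):
    @step
    async def parse_form(self, ctx: Context, ev: StartEvent) -> GenerateQuestionsEvent:
        # Parse the form once, stash the field list in the context,
        # then hand off to question generation.
        fields = await extract_form_fields(ev.application_form)  # hypothetical helper
        await ctx.set("fields_to_fill", fields)
        return GenerateQuestionsEvent()

    @step
    async def generate_questions(
        self, ctx: Context, ev: GenerateQuestionsEvent | FeedbackEvent
    ) -> QueryEvent:
        # Runs on the first pass (GenerateQuestionsEvent) and again on
        # every loop back from the feedback step (FeedbackEvent).
        fields = await ctx.get("fields_to_fill")
        for field in fields:
            ctx.send_event(QueryEvent(field=field, query=f"How would you fill in this field? {field}"))
        # collect_events() needs to know how many answers to wait for.
        await ctx.set("total_fields", len(fields))

    @step
    async def ask_questions(self, ctx: Context, ev: QueryEvent) -> ResponseEvent:
        # Unchanged from the previous lesson: answer one field's question,
        # e.g. with a query engine over the resume. Stubbed here.
        answer = await answer_from_resume(ev.query)  # hypothetical helper
        return ResponseEvent(field=ev.field, response=answer)

    @step
    async def fill_in_application(self, ctx: Context, ev: ResponseEvent) -> InputRequiredEvent:
        total = await ctx.get("total_fields")
        # collect_events() returns None until every ResponseEvent has arrived.
        responses = ctx.collect_events(ev, [ResponseEvent] * total)
        if responses is None:
            return None
        filled_form = "\n".join(f"{r.field}: {r.response}" for r in responses)
        # Keep the filled form in the context so the feedback step can return it later.
        await ctx.set("filled_form", filled_form)
        # Pause here and ask the human how things look.
        return InputRequiredEvent(prefix="How does this look? ", result=filled_form)
```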
In our next step, which is called get-feedback, we accept a human-response event. So unlike our previous steps, which were always triggered by the event from the step before, we now have a gap: the input-required event isn't handled by any other step, it's captured by something outside the workflow, and the human-response event isn't emitted anywhere inside the workflow; it has to come from outside. We'll see how that happens in a second. Once we get the feedback, we'll ask the LLM to decide whether the feedback is good or bad. If everything is fine, the LLM will read the feedback and say that things are okay, but if there's any other feedback, it will respond with the word "feedback", and we'll use that to decide what to do next. If the LLM said everything was okay, we emit a stop event and we're done. If the LLM says there's feedback, we emit a feedback event.

Cool. So now your workflow is ready to get some feedback, but how do we actually get it? It turns out you already know how. The input-required event is an event in the event stream, just like the progress events and text events you sent previously. You can intercept it the same way you did those, and use the send_event method on the context to send back a human-response event. All of this happens in the code where we execute the workflow, so let's walk through it line by line. This part is the same as you've seen before: we get a handler by executing the workflow. And this part filters the events just like we did before, except this time we're waiting for an input-required event. When we see an input-required event, we print out the form so the user can see what's up, and then we use the input method to get keyboard feedback. Once we've got the feedback in the response variable, we send it as a human-response event using the send_event method. And then we await the handler.
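Continuing the sketch from above, the get-feedback step might look something like this. The OKAY/FEEDBACK convention follows the lesson's description, but the prompt wording is illustrative, and self.llm is assumed to be an LLM attached during the unchanged setup step:

```python
    @step
    async def get_feedback(self, ctx: Context, ev: HumanResponseEvent) -> FeedbackEvent | StopEvent:
        # Ask the LLM to classify the human's response as approval or actionable feedback.
        result = await self.llm.acomplete(f"""
            You have received some human feedback on the form-filling task you have done.
            If everything looks fine, respond with just the word OKAY.
            If there is feedback to act on, respond with just the word FEEDBACK.
            <feedback>{ev.response}</feedback>
        """)
        verdict = result.text.strip()
        print(f"LLM says the verdict was {verdict}")
        if verdict == "OKAY":
            # The human is happy: stop the workflow and return the filled form.
            return StopEvent(result=await ctx.get("filled_form"))
        # Otherwise loop back to question generation, carrying the raw feedback along.
        return FeedbackEvent(feedback=ev.response)
```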
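And here's roughly what that driver code can look like as a standalone script. The constructor arguments and file paths are placeholders, and the run() keyword arguments are whatever inputs your start step expects; in a notebook you'd await the coroutine directly instead of calling asyncio.run():

```python
import asyncio

async def main():
    workflow = FormFillingWorkflow(timeout=600, verbose=False)
    handler = workflow.run(
        resume_file="./data/fake_resume.pdf",             # placeholder path
        application_form="./data/application_form.pdf",   # placeholder path
    )

    # Watch the event stream; when the workflow asks for input, show the
    # filled form, collect keyboard feedback, and send it back in.
    async for event in handler.stream_events():
        if isinstance(event, InputRequiredEvent):
            print("We've filled in your form! Here are the results:\n")
            print(event.result)
            response = input(event.prefix)
            handler.ctx.send_event(HumanResponseEvent(response=response))

    final_result = await handler
    print("Agent complete! Here's your final result:\n")
    print(final_result)

asyncio.run(main())
```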
So let's execute this and see how that looks. Cool. Here is the combined list of fields and the succinct, factual answers the LLM has given us. Just like before, it's done a good job of numbering everything, but project portfolio is still a list of things the applicant did instead of a link to their portfolio. So let's say that: "Portfolio should be a URL." The LLM says the verdict was feedback, so that's going to trigger the loop and it's going to try everything again. But we haven't actually given it a way to incorporate the feedback yet, so it's just going to do everything exactly the same way it did. We'll incorporate the feedback in our next change; in the meantime, let's just say this is fine. The LLM says the verdict was okay, so the workflow halts.

Cool. So now let's further modify things to actually do something useful with the feedback we've collected. This involves checking whether there is feedback and appending it to the questions. In this simplified example, we append the feedback to every question in case it's relevant, but a more sophisticated agent might apply it only to the fields it's relevant to. Here's our modified workflow. Setup is the same and parse-form is the same, but generate-questions has changed. Now we check whether the feedback attribute exists on our event; if it does, that means it's a feedback event and we're going to use it. So we modify the question for each field to include the feedback, and we note that it might not be relevant to this particular field, so that the LLM doesn't get confused about why we're applying the feedback to every single question. Then, just like before, we generate query events and store the total number of fields. Ask-questions remains the same, fill-in-application remains the same, and get-feedback remains the same (the modified generate-questions step is sketched at the end of this lesson).

So let's execute this cell and then run our workflow. Everything here is exactly the same as it was last time, except now it understands what to do with the feedback. Okay, we've got the first attempt at the form. As before, project portfolio is a description of the things the applicant has done. Now we'll give it the feedback "Portfolio should be a URL" and hit enter. The LLM has worked out that the verdict was feedback, so now it's going to try again, and it's going to append that feedback to every single question so the LLM knows what to do. Cool. Here's the new, modified version, and as you can see, field six, the portfolio, has correctly been turned into a URL. That's exactly what we wanted, so let's tell it: "That's great." One of the fun things about giving feedback in natural language is that you can just say anything and it will figure out what you meant. The LLM decides "that's great" means the verdict was okay, so it stops the workflow and gives us the output. Success! Our agent now responds to human-in-the-loop feedback and produces more accurately filled forms as a result. Congratulations.
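For reference, here's a rough sketch of that modified generate-questions step, using the same illustrative names as the earlier sketches; the only changes from before are the feedback check and the extra wording appended to each question:

```python
    @step
    async def generate_questions(
        self, ctx: Context, ev: GenerateQuestionsEvent | FeedbackEvent
    ) -> QueryEvent:
        fields = await ctx.get("fields_to_fill")

        for field in fields:
            question = f"How would you fill in this field? {field}"
            # Only FeedbackEvent carries a `feedback` attribute; on the first
            # pass (GenerateQuestionsEvent) this check is False.
            if hasattr(ev, "feedback"):
                question += (
                    " Here is some feedback from a human reviewer; it may not be "
                    f"relevant to this particular field, so ignore it if it isn't: {ev.feedback}"
                )
            ctx.send_event(QueryEvent(field=field, query=question))

        # As before, collect_events() needs the total count of expected answers.
        await ctx.set("total_fields", len(fields))
```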