Now that you have implemented your own RAG application using Airflow for orchestration, we'll talk a little bit about what's next. We'll go through production considerations, examine Astronomer's Ask Astro RAG implementation, and discuss additional use cases for Gen AI orchestration. Let's get to it.

As we have covered in this course, Airflow is commonly used as an orchestrator for many different use cases, including different types of Gen AI pipelines. In this course, you learned how to orchestrate retrieval-augmented generation, or RAG, pipelines, but you can also orchestrate other types of Gen AI pipelines. Inference execution pipelines, which orchestrate the process of running a trained machine learning model on input data and collecting the results, are commonly implemented with Airflow. These might be batch inference pipelines or another type of inference execution. Airflow can also be used to manage automated model training, retraining, and fine-tuning. Remember, any Python-based notebook can be turned into an Airflow DAG, or pipeline.

Astronomer has built our own real-world RAG application using Airflow. Ask Astro is an open-source reference implementation of Andreessen Horowitz's LLM application architecture. It is a Q&A LLM application used to answer questions about Airflow and Astronomer. There are two aspects to this open-source application. First, data is retrieved from sources like GitHub issues, the Airflow and Astronomer documentation, and the Airflow Slack, processed with LangChain, embedded with OpenAI, and then stored in a Weaviate vector database. This pipeline is orchestrated with Airflow. Second, Airflow is also used to improve the model's performance over time. Users can rate answers given in the application, and that data is processed with LangChain and OpenAI on a schedule to update the data so that good answers are repeated and bad answers are not. This RAG application is extremely useful for Airflow developers.
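To make the ingestion side of a pipeline like this concrete, here is a minimal sketch of the fetch-chunk-embed-store flow as plain Python functions, each of which could become one Airflow task. This is an illustration, not the actual Ask Astro code; the `embed` and `store` stubs are hypothetical stand-ins for real OpenAI and Weaviate calls.

```python
# Minimal sketch of a RAG ingestion flow: chunk -> embed -> store.
# Each function could be wrapped in an Airflow @task; the embedding and
# vector-store calls are hypothetical stubs, not real API clients.

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for embedding."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

def embed(chunk: str) -> list[float]:
    """Stand-in for a real embedding call (e.g., an OpenAI embedding model)."""
    return [float(len(chunk))]  # placeholder vector

def store(records: list[dict]) -> int:
    """Stand-in for writing vectors to a vector database such as Weaviate."""
    return len(records)

def ingest(document: str) -> int:
    """One document in, number of stored vector records out."""
    records = [{"text": c, "vector": embed(c)} for c in chunk_text(document)]
    return store(records)
```

In a real DAG, each of these steps would typically be its own task so that failures (say, a rate-limited embedding call) can be retried independently.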
Ask Astro can help you write and update DAGs for any use case you're looking to implement. As an example, we can write a query that says: write me a DAG that manages a RAG data retrieval and embedding pipeline using Weaviate and OpenAI. Ask Astro will write the DAG automatically for us, and we can take this, run it in Airflow, and update it as needed. This is a real-world example implementation of the information used in this course, and will hopefully be a tool in your toolbox for further learning with Airflow and Gen AI orchestration.

When you implement RAG pipelines like the ones covered in Ask Astro in production and at scale, there are several other things you might consider after you've written your pipelines. You will likely need to ingest data from more sources than you worked with in this course. That is easily managed by Airflow, either by creating more DAGs to run in parallel, or in some cases by using dynamic task mapping, like you learned about earlier. You may need to do additional transformation of your source data, like text chunking and selection or preselection of documents. Remember, as long as you can write a Python function to do it, you can turn it into an Airflow task. Data quality checks are also easy to implement in Airflow and are important for keeping your application up to date. You will likely use Airflow for fine-tuning and deployment of models and for implementing user feedback pipelines. Feedback helps make your application perform better and can be automated with Airflow using a pipeline like the example we just showed in Ask Astro.

We also want to briefly touch on other types of AI pipelines that can be orchestrated with Airflow. While we don't show inference execution or batch inference DAGs in this course, they are commonly implemented with Airflow. One example use case is automating a personalized newsletter service. In this diagram, we have two Airflow pipelines.
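As one concrete illustration of the data quality point: a check is just a Python function that raises when an expectation fails, which Airflow then surfaces as a failed task. This is a generic sketch with made-up record fields and thresholds, not code from the course or from Ask Astro.

```python
# A simple data quality gate: validate freshly ingested document records
# before they are embedded. In Airflow this function could be a @task, and
# with dynamic task mapping it could run once per source in parallel.

def check_documents(records: list[dict], min_count: int = 1) -> list[dict]:
    """Raise ValueError on bad data so a wrapping Airflow task fails loudly."""
    if len(records) < min_count:
        raise ValueError(
            f"expected at least {min_count} records, got {len(records)}"
        )
    for record in records:
        if not record.get("text", "").strip():
            raise ValueError(
                f"empty document from source {record.get('source')!r}"
            )
    return records  # pass the validated records downstream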
One is a traditional ETL pipeline that gets the data needed for the newsletter formatter and creates a template. The second, on the bottom, is a batch inference pipeline that takes user input and personalizes the newsletter content for each reader by sending the user data in a prompt to an LLM and loading the results into the newsletter template. This DAG can be run as a batch inference DAG, say daily, where it will process all new user information, but it can also run using online inference execution, meaning when a user inputs their information in a web form and indicates they want a newsletter right away, the pipeline sends a message to a message queue, which triggers the DAG in Airflow via event-driven scheduling. This example could warrant a whole class in itself, so we only touch on it at a high level. But for now, you can take it as inspiration for other Gen AI orchestration that you can implement with Airflow.

Much like with our RAG application in this course, implementing batch or online inference execution pipelines in production also comes with other considerations. You may use Airflow for something at significantly higher scale, like generating product descriptions for all new products on an e-commerce website. In this case, your Airflow DAGs might look very similar to what you've seen here, but you will need to pay attention to scaling your Airflow infrastructure, possibly using a managed service. You may also need to leverage additional scheduling capabilities. Rather than run your DAGs daily, you may want them to run as soon as data is available, so they can provide the backend for your Gen AI application. An example of this would be a user requesting a personalized recipe based on their purchases from an online grocery store. Event-driven scheduling lets Airflow kick off that pipeline as soon as the user inputs their request.

There are also other packages outside of Airflow that can help you implement your Gen AI pipelines.
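The batch inference half of the newsletter diagram boils down to: for each user, build a prompt from their data, call a model, and merge the result into the shared template. A minimal sketch of that logic, with a stubbed `call_llm` standing in for the real model call and a made-up user record shape:

```python
# Sketch of the newsletter batch inference step: one prompt per user, results
# merged into a shared template. `call_llm` is a stub for a real LLM API call;
# in Airflow, `personalize` could be dynamically mapped over all new users.

TEMPLATE = "Hello {name}!\n{body}\n"

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; echoes a canned response."""
    return f"(generated from: {prompt})"

def personalize(user: dict, template: str = TEMPLATE) -> str:
    """Build one personalized newsletter from a user record."""
    prompt = (
        f"Write a newsletter blurb for a reader interested in {user['interest']}."
    )
    body = call_llm(prompt)
    return template.format(name=user["name"], body=body)

def run_batch(users: list[dict]) -> list[str]:
    """Daily batch mode: personalize for every new user at once."""
    return [personalize(user) for user in users]
```

In the online variant described above, `personalize` would instead run for a single user whenever a message arrives on the queue and triggers the DAG.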
For example, the Airflow AI SDK developed by Astronomer is an open-source package for working with LLMs from Airflow, based on Pydantic AI. You can call LLMs easily from your Airflow tasks with decorators like @task.llm, or interact with and orchestrate AI agent calls. The repository contains detailed examples to help you get started.

Finally, this course has only scratched the surface of what you can do with Airflow to orchestrate Gen AI pipelines. A great resource for further reading is the Manning Practical Guide to Apache Airflow 3, written by Astronomer. This free e-book walks through everything you need to know about Airflow 3 features, including an in-depth batch inference pipeline explanation. It's a great next step in your learning journey.
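Decorators like these essentially wrap the "shape the input, call the model, return the response" pattern into a single task. To show that shape without requiring Airflow, an API key, or the SDK itself, here is a library-agnostic toy version of the pattern; `llm_task` and `fake_model` are hypothetical stand-ins, not the SDK's actual implementation, so see the SDK repository for the real decorator signatures.

```python
# Toy, library-agnostic sketch of the pattern an LLM task decorator wraps:
# the decorated function prepares the prompt, the wrapper sends it to a
# model and returns the response. `fake_model` is a stub, not a real backend.
from functools import wraps

def fake_model(prompt: str) -> str:
    """Stand-in for a real LLM backend (e.g., one configured via Pydantic AI)."""
    return f"summary of: {prompt}"

def llm_task(func):
    """Hypothetical decorator: run func to build a prompt, then call the model."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        prompt = func(*args, **kwargs)
        return fake_model(prompt)
    return wrapper

@llm_task
def summarize_issue(issue_text: str) -> str:
    # The task body only shapes the input; the decorator handles the LLM call.
    return f"Summarize this GitHub issue: {issue_text}"
```

The appeal of this pattern in an orchestrator is that the model call becomes an ordinary task, so it gets Airflow's retries, logging, and monitoring for free.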