OS-Copilot: Towards Generalist Computer Agents with Self-Improvement

Zhiyong Wu1*, Chengcheng Han2*, Zichen Ding2, Zhenmin Weng2,
Zhoumianze Liu1, Shunyu Yao3, Tao Yu4, Lingpeng Kong4
1Shanghai AI Lab, 2East China Normal University, 3Princeton University, 4University of Hong Kong
*Equal contribution

Demo1: Operating Excel Files.

Demo2: Creating a Webpage.

Demo3: Entering Focused Mode.

Demo4: Playing Music.

(Stay tuned! More demos are coming soon~)

Abstract

Autonomous interaction with the computer has been a longstanding challenge with great potential, and the recent proliferation of large language models (LLMs) has markedly accelerated progress in building digital agents. However, most of these agents are designed to interact with a narrow domain, such as a specific piece of software or a website. This narrow focus constrains their applicability for general computer tasks. To this end, we introduce OS-Copilot, a framework to build generalist agents capable of interfacing with comprehensive elements in an operating system (OS), including the web, code terminals, files, multimedia, and various third-party applications. We use OS-Copilot to create FRIDAY, a self-improving embodied agent for automating general computer tasks. On GAIA, a general AI assistants benchmark, FRIDAY outperforms previous methods by 35%, showcasing strong generalization to unseen applications via skills accumulated from previous tasks. We also present quantitative and qualitative evidence that FRIDAY learns to control and self-improve on Excel and PowerPoint with minimal supervision. Our OS-Copilot framework and empirical findings provide infrastructure and insights for future research toward more capable and general-purpose computer agents.

Running examples of FRIDAY when tasked with (1) preparing a focused working environment, (2) drawing a chart in Excel, and (3) creating a website for OS-Copilot.

The OS-Copilot Framework

We introduce OS-Copilot, a framework that assists in building OS-level language agents, accompanied by modular implementations of each component to facilitate agent development.

Planner

The planner component reasons over user requests and decomposes complex ones into simpler subtasks. Most importantly, the planner needs to understand the agent’s capabilities in order to generate plans at the correct granularity. To achieve this, it retrieves relevant information about those capabilities, such as in-house tools and operating-system information, to assist planning.
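As a rough sketch of this idea, the snippet below asks an LLM to decompose a request while conditioning on retrieved tool descriptions and OS information. `call_llm` and the prompt wording are placeholders, not the OS-Copilot API.

```python
# Minimal planner sketch. `call_llm` is a placeholder to be wired to an actual model;
# the prompt format is illustrative, not the one used by OS-Copilot.
from typing import List

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call."""
    raise NotImplementedError

def plan(user_request: str, tool_descriptions: List[str], os_info: str) -> List[str]:
    """Decompose a user request into subtasks, conditioned on the agent's capabilities."""
    prompt = (
        f"Operating system: {os_info}\n"
        "Available tools:\n"
        + "\n".join(f"- {d}" for d in tool_descriptions)
        + f"\nUser request: {user_request}\n"
        "Return a numbered list of subtasks the agent can execute, one per line."
    )
    response = call_llm(prompt)
    # One subtask per non-empty line of the model's numbered list.
    return [line.strip() for line in response.splitlines() if line.strip()]
```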

Configurator

The configurator component takes a subtask from the planner and configures it to help the actor complete the subtask. Our design of the configurator is inspired by the human brain, which maintains working, declarative, and procedural memory.
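The sketch below illustrates one way such a three-way memory split and the resulting configuration prompt could be organized. The class and field names are our own assumptions, not the framework's actual interfaces.

```python
# Illustrative memory layout for a configurator; names and fields are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AgentMemory:
    working: List[str] = field(default_factory=list)           # current subtask context, recent feedback
    declarative: Dict[str, str] = field(default_factory=dict)  # facts: user profile, OS version, working directory
    procedural: Dict[str, str] = field(default_factory=dict)   # skills: tool name -> tool source code

def build_config_prompt(subtask: str, memory: AgentMemory, relevant_tools: List[str]) -> str:
    """Assemble the prompt handed to the actor for one subtask."""
    facts = "\n".join(f"{key}: {value}" for key, value in memory.declarative.items())
    tools = "\n".join(relevant_tools)
    return f"Subtask: {subtask}\nEnvironment facts:\n{facts}\nCandidate tools:\n{tools}"
```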

Actor

The actor comprises two stages: executable action grounding and self-criticism. In the first stage, the executor proposes an executable action (e.g., the bash command "mkdir new_folder") based on the configuration prompt and then executes the action in the operating system (through the Bash runtime environment in this example). The critic module then assesses the outcomes of the execution and formulates feedback to refine execution errors and/or update the long-term memory.
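A minimal sketch of this execute-then-criticize pattern is shown below, assuming the action is a plain shell command; the feedback format is illustrative only.

```python
# Sketch of the actor's two stages: run a grounded shell action, then let a simple
# critic turn the outcome into feedback. Error handling is intentionally minimal.
import subprocess

def execute_bash(command: str) -> subprocess.CompletedProcess:
    """Run a proposed shell action and capture its output for the critic."""
    return subprocess.run(command, shell=True, capture_output=True, text=True)

def critic(result: subprocess.CompletedProcess) -> str:
    """Turn raw execution outcomes into feedback the refiner can act on."""
    if result.returncode == 0:
        return "success"
    return f"failed (exit code {result.returncode}): {result.stderr.strip()}"

# Example, using the command from the paragraph above:
print(critic(execute_bash("mkdir new_folder")))
```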

The image below shows an overview of the OS-Copilot framework:

An overview of the OS-Copilot framework.

The FRIDAY Agent

FRIDAY is designed to maximize generality by equipping the agent with the capacity for self-refinement and self-directed learning. We first use an example to illustrate how FRIDAY operates, emphasizing its capacity for self-refinement. We then describe how FRIDAY learns to control unfamiliar applications through self-directed learning.


(a) Configurator


(b) A running example

A Running Example

In the figures above, we use a running example to demonstrate how FRIDAY functions within the OS.

Upon receiving the subtask “Change the system into the Dark mode” (step ①), the Configuration Tracker employs dense retrieval to recall relevant information from the long-term memory and construct a prompt (step ②). This prompt encompasses related tools, the user profile, the OS version, and the agent’s working directory.
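The retrieval step could look roughly like the sketch below: embed the subtask, compare it against stored tool descriptions by cosine similarity, and keep only matches above a threshold. The embedding function and the 0.8 cutoff are assumptions for illustration.

```python
# Dense-retrieval sketch over the tool repository; `embed` and the threshold are assumptions.
from typing import Callable, Dict, List
import numpy as np

def retrieve_tools(
    subtask: str,
    tool_descriptions: Dict[str, str],
    embed: Callable[[str], np.ndarray],
    threshold: float = 0.8,
) -> List[str]:
    """Return names of stored tools whose descriptions are similar enough to the subtask."""
    query = embed(subtask)
    query = query / np.linalg.norm(query)
    matches = []
    for name, description in tool_descriptions.items():
        vector = embed(description)
        similarity = float(np.dot(query, vector / np.linalg.norm(vector)))
        if similarity >= threshold:
            matches.append(name)
    return matches  # an empty list triggers the Tool Generator (step ③)
```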

In this example, no suitable tool is identified (all similarities fall below a specified threshold), which triggers the Tool Generator to devise an application-tailored tool for the current subtask (step ③). As shown in Figure (b), the generated tool is a Python class that uses AppleScript to switch the system to dark mode.
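A plausible shape for such a tool is sketched below: a small Python class that shells out to AppleScript on macOS. The class and method names are illustrative, not the exact code FRIDAY generated.

```python
# Illustrative generated tool: toggle macOS dark mode through AppleScript (macOS only).
import subprocess

class EnableDarkMode:
    """Switch the macOS appearance to dark mode via AppleScript."""

    SCRIPT = (
        'tell application "System Events" to tell appearance preferences '
        "to set dark mode to true"
    )

    def run(self) -> bool:
        result = subprocess.run(["osascript", "-e", self.SCRIPT],
                                capture_output=True, text=True)
        return result.returncode == 0  # True if the appearance switch succeeded

if __name__ == "__main__":
    print("dark mode enabled:", EnableDarkMode().run())
```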

Subsequently, with the tool created and the configuration prompt finalized, the Executor processes the prompt, generates an executable action, and executes it (step ④). As shown at the bottom of Figure (b), the executor first stores the tool code in a Python file and then executes it in the command-line terminal.
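In code, that grounding step amounts to something like the following sketch; the file name and the bare `python` invocation are assumptions.

```python
# Sketch of the executor's grounding step: persist the generated tool code, then run it
# in a terminal subprocess and capture the output for the critic.
import subprocess
from pathlib import Path

def run_generated_tool(tool_code: str, path: str = "generated_tool.py") -> subprocess.CompletedProcess:
    Path(path).write_text(tool_code)
    return subprocess.run(["python", path], capture_output=True, text=True)
```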

After execution, the critic evaluates whether the subtask has been completed successfully (step ⑤). Upon success, the critic assigns the generated tool a score from 0 to 10 (via an LLM), with a higher score indicating greater potential for future reuse. In the current implementation, tools scoring above 8 are preserved by updating the tool repository in procedural memory (step ⑦).
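The bookkeeping after a successful run might look like the sketch below, where an LLM-backed critic rates the tool and only high scorers are written back to procedural memory; `call_llm` and the rubric prompt are placeholders.

```python
# Post-execution bookkeeping sketch: score the new tool and keep it only if it clears the bar.
from typing import Dict

def call_llm(prompt: str) -> str:
    """Placeholder LLM call, as in the planner sketch above."""
    raise NotImplementedError

def score_tool(tool_code: str, subtask: str) -> int:
    """Ask the critic LLM to rate the tool's reusability on a 0-10 scale."""
    prompt = (
        f"Subtask: {subtask}\nTool code:\n{tool_code}\n"
        "Rate how reusable this tool is for future tasks, from 0 to 10. Reply with a single integer."
    )
    return int(call_llm(prompt).strip())

def maybe_save_tool(name: str, tool_code: str, subtask: str,
                    tool_repo: Dict[str, str], threshold: int = 8) -> None:
    """Persist only high-scoring tools into procedural memory (step ⑦)."""
    if score_tool(tool_code, subtask) > threshold:
        tool_repo[name] = tool_code
```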

In the event of a failed execution, however, the refiner collects feedback from the critic and initiates self-correction (step ⑥) of the responsible action, tool, or subtask. FRIDAY iterates through steps ④ to ⑥ until the subtask is considered complete or a maximum of three attempts is reached.
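Put together, the execute/criticize/refine cycle reduces to a bounded loop like the one below; the four callables stand in for FRIDAY's internal components and are not part of the published API.

```python
# Bounded self-refinement loop over steps ④–⑥; the callables are placeholders.
from typing import Callable, Tuple

def run_subtask(
    subtask: str,
    propose: Callable[[str], str],
    execute: Callable[[str], str],
    judge: Callable[[str, str], Tuple[bool, str]],
    refine: Callable[[str, str], str],
    max_attempts: int = 3,
) -> bool:
    """Return True if the subtask is judged complete within the attempt budget."""
    action = propose(subtask)                    # step ④: ground an executable action
    for _ in range(max_attempts):
        outcome = execute(action)
        ok, feedback = judge(subtask, outcome)   # step ⑤: critic verdict and feedback
        if ok:
            return True
        action = refine(action, feedback)        # step ⑥: self-correct from feedback
    return False
```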

Self-Directed Learning

Self-directed learning is a crucial ability that lets humans acquire information and learn new skills, and it has shown promising results for embodied agents in Minecraft.

With a pre-defined learning objective, such as mastering spreadsheet manipulation, FRIDAY is prompted to propose a continuous stream of tasks related to the objective, spanning from easy to challenging. FRIDAY then follows this curriculum, resolving the tasks through trial and error and accumulating valuable tools and semantic knowledge in the process. Despite its simple design, our evaluation indicates that self-directed learning is crucial for a general-purpose OS-level agent.
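A stripped-down version of this curriculum loop is sketched below; `call_llm` and `attempt_task` are placeholders for FRIDAY's actual components, and the prompt wording is ours.

```python
# Self-directed learning sketch: propose an easy-to-hard curriculum, then attempt each task.
from typing import Callable, List

def propose_curriculum(objective: str, call_llm: Callable[[str], str], n_tasks: int = 10) -> List[str]:
    """Ask the LLM for a list of tasks toward the objective, ordered from easy to hard."""
    prompt = (
        f"Learning objective: {objective}\n"
        f"Propose {n_tasks} concrete tasks toward this objective, ordered from easy to hard, one per line."
    )
    return [task.strip() for task in call_llm(prompt).splitlines() if task.strip()]

def self_directed_learning(objective: str, call_llm: Callable[[str], str],
                           attempt_task: Callable[[str], bool]) -> None:
    """Work through the curriculum by trial and error, banking tools and knowledge along the way."""
    for task in propose_curriculum(objective, call_llm):
        attempt_task(task)  # each success can add new tools or semantic knowledge to long-term memory
```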

Experiments

Main Results

We evaluate FRIDAY on GAIA, a benchmark for general AI assistants featuring 466 challenging question-answering tasks. To answer GAIA questions, language agents need to calculate, browse the web, handle multimodal inputs, manipulate files, and more.

Evaluation Results. All results are reported on the private test set, except for the Human score, which is averaged across the dev and test sets.

Self-directed Learning

We perform quantitative and qualitative evaluations to analyze FRIDAY’s self-directed learning capability.

QUANTITATIVE ANALYSIS

To showcase FRIDAY’s ability to master unfamiliar applications through self-directed learning, we conduct experiments on the SheetCopilot-20 dataset. This dataset includes 20 spreadsheet-control tasks covering operations such as Formatting, Management, Charts, Pivot Tables, and Formulas, representing typical spreadsheet use cases.

Comparison of different agents on the SheetCopilot-20 dataset. Pass@1 refers to the pass rate with each task being performed only once. We highlight the best results in bold.

QUALITATIVE ANALYSIS


(a) FRIDAY w/o self-directed learning.


(b) FRIDAY after learning text box control.


(c) FRIDAY after mastering image insertion.

In our qualitative analysis, we design a task that asks FRIDAY to create a PowerPoint slide introducing OS-Copilot. The specific content, font, font size, and other details required for the slide are described in detail in the task instruction.

The experimental results, as shown in Figure (a), demonstrate that without self-directed learning, FRIDAY struggles to effectively control font types, sizes, and the positioning and sizing of inserted images.

Nevertheless, following a period of self-directed learning, FRIDAY acquires various tools for text box configuration, such as changing the text color, adjusting the font size of slide text, and modifying the line spacing of body text in PowerPoint presentations, as illustrated in Figure (b).

Further exploration leads FRIDAY to learn how to adjust the size and position of inserted images, ultimately successfully completing the task, as depicted in Figure (c).

Community

Join our community to connect with other enthusiasts, share your tools and demos, and collaborate on innovative projects. Stay engaged and get the latest updates by following us:

  • Discord: Join our Discord server for real-time discussions, support, and to share your work with the community. Click here to join: Discord Server.
  • Twitter: Follow us on Twitter @oscopilot for the latest news, updates, and highlights from our community.

BibTeX


        
        @misc{wu2024oscopilot,
          title={OS-Copilot: Towards Generalist Computer Agents with Self-Improvement}, 
          author={Zhiyong Wu and Chengcheng Han and Zichen Ding and Zhenmin Weng and Zhoumianze Liu and Shunyu Yao and Tao Yu and Lingpeng Kong},
          year={2024},
          eprint={2402.07456},
          archivePrefix={arXiv},
          primaryClass={cs.AI}
        }