# [computer-use-demo] Tool execution hangs with timeout error #180
**Update:** I need to correct my initial analysis; further investigation revealed a different root cause. Here's what I found.

**Actual Root Cause**

The timeout issues were not related to buffer handling or large outputs as initially suspected, but stemmed from event loop management: `stop()` signalled termination without awaiting the process's exit.

**The Fix**

The solution was much simpler than my original proposal. The key change was making `stop()` await process termination:

```python
async def stop(self):
    if self._process.returncode is None:
        self._process.terminate()
        await self._process.wait()  # Ensure the process has actually terminated
```

**Testing Results**

Testing confirmed the fix, including with background processes.

**Retracting Original Proposal**

I apologize for any confusion my initial analysis may have caused. This is a good reminder that thorough investigation can sometimes reveal simpler solutions than initially suspected.
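For anyone hitting the same hang, here is a minimal sketch of the kind of session wrapper the corrected `stop()` belongs to; the class name, `start()` body, and shell path are assumptions for illustration, not the actual demo code:

```python
import asyncio

class _BashSession:
    """Minimal sketch of a long-lived shell session (names are assumptions)."""

    _process: asyncio.subprocess.Process

    async def start(self) -> None:
        # Keep one shell alive across commands; its pipes are read elsewhere.
        self._process = await asyncio.create_subprocess_shell(
            "/bin/bash",
            stdin=asyncio.subprocess.PIPE,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )

    async def stop(self) -> None:
        if self._process.returncode is None:
            self._process.terminate()
            await self._process.wait()  # Block until the OS reports the exit
```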
---

## Root Cause Analysis

The tool execution hang appears to stem from several implementation issues:

**bash.py Implementation Issues:**

- Hard-coded 120-second timeout (`_timeout = 120.0`)
- Inefficient buffer handling (polling with a 0.2s delay)
- Entire buffer decoded at once
- No streaming/chunking of large outputs
Current problematic implementation:

```python
# Current inefficient approach in bash.py
async def run(self, command: str):
    # Entire output is loaded into memory at once; note that the awaited
    # run() here is a lower-level helper, not this method.
    returncode, output, error = await run(
        command,
        timeout=120.0,  # Hard-coded timeout
    )
    return CLIResult(output=output, error=error)  # No chunking
```
## Impact on Users

This issue severely impacts users when:

- Listing directories with more than 1,000 files (output >16KB)
- Reading large log files
- Running system queries that return substantial data

Users must refresh the page every five minutes, which breaks their workflow and loses command history.
**Base Architecture Findings:**

- The `ToolResult` class supports output combination (roughly as sketched below)
- No built-in streaming support
- Error handling focuses on immediate failures rather than timeouts
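For readers unfamiliar with the codebase, here is a simplified sketch of what "output combination" on the result class could look like; the field names and combination semantics are assumptions based on this discussion, not the actual class:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolResult:
    """Simplified sketch; the real class has more fields."""
    output: str | None = None
    error: str | None = None

    def __add__(self, other: "ToolResult") -> "ToolResult":
        # "Output combination": concatenate fields from successive results.
        def combine(a: str | None, b: str | None) -> str | None:
            return (a or "") + (b or "") if (a or b) else None
        return ToolResult(
            output=combine(self.output, other.output),
            error=combine(self.error, other.error),
        )
```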
## Proposed Solutions

**Buffer Handling (Priority 1):**

Example implementation approach:

```python
# Sketch: stream output in chunks instead of decoding one large buffer.
# stream_command and process_chunk are illustrative helpers, not existing APIs.
async def run(self, command: str):
    CHUNK_SIZE = 8000
    async for chunk in stream_command(command):
        if len(chunk) > CHUNK_SIZE:
            yield process_chunk(chunk)  # e.g. split or truncate oversized chunks
        else:
            yield chunk
```

- Implement chunked reading instead of a full buffer decode (a concrete `stream_command` is sketched after this list)
- Add output streaming capabilities
- Consider increasing the timeout for known large operations
- Add buffer size monitoring
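To make the example above concrete, here is one way the hypothetical `stream_command()` helper could be written with asyncio's subprocess API; error handling and per-chunk timeouts are omitted:

```python
import asyncio
from typing import AsyncIterator

CHUNK_SIZE = 8000  # bytes surfaced to the caller at a time

async def stream_command(command: str) -> AsyncIterator[str]:
    """Hypothetical helper: read stdout incrementally instead of decoding
    one large buffer after the process exits."""
    process = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    assert process.stdout is not None  # guaranteed by stdout=PIPE
    while True:
        chunk = await process.stdout.read(CHUNK_SIZE)
        if not chunk:  # EOF: the process closed stdout
            break
        yield chunk.decode(errors="replace")
    await process.wait()
```

A caller can then forward each chunk to the UI as it arrives, e.g. `async for chunk in stream_command("ls -la /usr"): ...`, rather than waiting for the full 16KB+ buffer.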
**Error Handling (Priority 2):**

- Add specific timeout recovery mechanisms (see the retry sketch below)
- Implement automatic retry logic for buffer overflows
- Add progress indicators for long-running operations
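A hedged sketch of what the timeout-recovery and retry items could look like; the function name, backoff policy, and defaults are assumptions:

```python
import asyncio

async def run_with_retry(command: str, timeout: float = 120.0, retries: int = 2) -> str:
    """Sketch of timeout recovery: kill the hung process and retry with a
    doubled timeout instead of hanging the whole tool loop."""
    for _ in range(retries + 1):
        process = await asyncio.create_subprocess_shell(
            command,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.STDOUT,
        )
        try:
            stdout, _unused = await asyncio.wait_for(process.communicate(), timeout)
            return stdout.decode(errors="replace")
        except asyncio.TimeoutError:
            process.kill()           # do not leave the hung process behind
            await process.wait()
            timeout *= 2             # give large operations more time next try
    raise TimeoutError(f"{command!r} still timing out after {retries + 1} attempts")
```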
**Result Management (Priority 3):**

- Add streaming result support to the `ToolResult` class (a possible shape is sketched below)
- Implement progressive output updates
- Add memory usage monitoring
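One possible shape for streaming support on the result side; `StreamingToolResult` is hypothetical, not an existing class:

```python
from dataclasses import dataclass, field

@dataclass
class StreamingToolResult:
    """Hypothetical streaming-aware result: accumulate chunks progressively
    instead of holding a single final output string."""
    chunks: list[str] = field(default_factory=list)
    error: str | None = None

    def append(self, chunk: str) -> None:
        self.chunks.append(chunk)  # progressive update as output streams in

    @property
    def output(self) -> str:
        # Compatibility shim: callers expecting a plain .output still work.
        return "".join(self.chunks)
```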
**Alternative Approaches Considered:**

- Process spawning: split large commands into smaller sub-processes
- Output pagination: add a `--max-output` flag to limit initial output (a trimming helper is sketched below)
- Client-side caching: store partial results to prevent complete reruns
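For the pagination alternative, the trimming itself could be as simple as this; the function name and message format are assumptions, and the 16,000 default mirrors the >16KB outputs mentioned above:

```python
def paginate_output(output: str, max_output: int = 16_000) -> str:
    """Sketch of the --max-output idea: cap what is returned up front and
    tell the user how much was held back."""
    if len(output) <= max_output:
        return output
    omitted = len(output) - max_output
    return output[:max_output] + f"\n[... {omitted} characters omitted ...]"
```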
Looking forward to feedback on these approaches, particularly the chunked implementation example.