178 Stars, Zero Cost: This Python Repo Guts Paid Stock Screeners
Runs free on GitHub Actions and pushes LLM buy/sell dashboards to Slack or Telegram daily, but the data-source config is where most installs stall.
5-minute deploy, zero cost, no server needed. Runs on GitHub Actions with any LLM API key you already have, pushing daily decision dashboards to WeChat, Telegram, Slack, Discord, or email.
- Zero hosting cost is the real story: GitHub Actions free tier runs this on a daily cron without a bill attached.
- Seven LLM providers supported means you route calls to the cheapest one that clears your accuracy bar, not the vendor's preferred one.
- Seven market data connectors and seven news search APIs in one open-source package beats what most paid screeners bundle.
- Pin to a commit hash before going live; the README flags active Web UI churn that can silently break your dashboard between deploys.
The unit economics of retail stock analysis are getting dismantled from below. A Python repository called daily_stock_analysis, published by GitHub user ZhuLinsen, topped Python trending charts today with 178 stars. The pitch is blunt: it does what paid stock screening platforms charge a monthly fee to do, runs the analysis through an LLM of your choice, and delivers a structured buy/sell dashboard to Slack, Telegram, Discord, or email on a daily schedule. Infrastructure cost is zero. Hosting cost is zero. The only required spend is LLM API tokens per run.
What Shipped
The repository is a Python system that aggregates market data across A-share, Hong Kong, and US equities, passes it through a configurable LLM, and pushes a structured decision dashboard to the notification channels you configure. The feature surface is wider than a typical weekend project: technical analysis, real-time pricing, news sentiment aggregation, fund flow data, fundamental summaries, and a backtesting module for validating historical signal accuracy.
The data layer runs across seven market data connectors: TickFlow, AkShare, Tushare, Pytdx, Baostock, YFinance, and Longbridge. News search routes through a configurable list of seven providers: Anspire (optimized for Chinese-language content), SerpAPI, Tavily, Bocha, Brave, MiniMax, and SearXNG for self-hosted deployments. Optional social sentiment data for US markets covers Reddit, X, and Polymarket via a Stock Sentiment API integration.
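The priority-fallback behavior across those connectors can be sketched in a few lines. The connector names below match the article, but the fetch functions are stand-ins, not the repo's actual interfaces:

```python
# Hypothetical sketch: try market data connectors in priority order and
# return the first success. Real connectors would wrap AkShare, YFinance,
# etc.; these toy versions simulate two failures and one success.

def tickflow(symbol):
    raise TimeoutError("connector down")

def akshare(symbol):
    raise ValueError("rate limited")

def yfinance(symbol):
    return {"symbol": symbol, "close": 187.3}

def fetch_quote(symbol, connectors):
    """Walk the priority list; return (connector_name, quote) on first hit."""
    errors = []
    for name, fetch in connectors:
        try:
            return name, fetch(symbol)
        except Exception as exc:  # production code would narrow this
            errors.append((name, exc))
    raise RuntimeError(f"all connectors failed for {symbol}: {errors}")

connectors = [("tickflow", tickflow), ("akshare", akshare), ("yfinance", yfinance)]
source, quote = fetch_quote("AAPL", connectors)
print(source, quote["close"])  # falls through to the third connector
```

The same chain-of-responsibility shape applies to the news search providers: order the list by cost or language fit, and degraded providers stop costing you a failed run.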
LLM routing is model-agnostic: Gemini, Claude, DeepSeek, Qwen via OpenAI-compatible endpoints, Ollama for local inference, and AIHubMix as a single-key aggregation layer for switching between providers. You configure API keys as GitHub Actions secrets; the system handles the routing.
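What OpenAI-compatible routing looks like in practice can be sketched briefly. The base URLs below are the providers' publicly documented endpoints at the time of writing, and the env var names are assumptions for illustration, not necessarily what this repo reads:

```python
import os

# Illustrative only: map a provider name to an OpenAI-compatible base URL
# and the env var holding its key. Verify URLs against provider docs.
ENDPOINTS = {
    "deepseek": ("https://api.deepseek.com", "DEEPSEEK_API_KEY"),
    "ollama":   ("http://localhost:11434/v1", None),  # local, no key needed
    "aihubmix": ("https://aihubmix.com/v1", "AIHUBMIX_API_KEY"),
}

def client_config(provider):
    base_url, key_env = ENDPOINTS[provider]
    api_key = os.environ.get(key_env, "") if key_env else "ollama"
    # Pass these to openai.OpenAI(base_url=..., api_key=...); any
    # OpenAI-compatible SDK call then routes to the chosen provider.
    return {"base_url": base_url, "api_key": api_key}

cfg = client_config("ollama")
```

Because every entry speaks the same wire protocol, swapping providers is a config change, not a code change.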
The canonical deployment is a GitHub Actions workflow that runs at 18:00 Beijing time on weekdays. Fork the repo, configure secrets in repository settings, trigger a manual run to validate, and the dashboard lands in your configured channel. The README bills setup time at five minutes via this path.
A local web interface runs on port 8000 and covers configuration management, task monitoring, historical reports, backtesting, and portfolio management with light and dark theme support. An Agent strategy Q&A feature supports multi-turn conversation with built-in analysis strategies: moving average crossovers, Elliott Wave, trend analysis, and roughly a dozen others available via web, bot, or API.
[[IMG: a developer reviewing a dark-themed Python terminal window with daily stock analysis output, multiple API connection logs and JSON response payloads visible on a wide monitor]]
Why It Matters
The category this repo occupies is "LLM-as-analysis-layer for self-hosted retail finance tooling." What daily_stock_analysis does differently from comparable tools is collapse the infrastructure requirement entirely. GitHub Actions becomes the scheduler. An existing LLM API key becomes the analysis engine. Notification channels already in use become the delivery layer. The marginal cost of adding this workflow is essentially the LLM API tokens consumed per daily run, a figure that has been falling steadily across every provider in this space.
The 178 stars in a single day signal something specific about developer appetite right now. Consistent demand exists for capable tooling that an operator owns rather than rents on subscription. The same structural pull drove Home Assistant's install base; the same pattern explains Ollama's pull rate trajectory. Self-hosted, zero-hosting-cost alternatives to SaaS land differently when LLM API call costs have dropped to the point where a daily stock analysis run is priced below noise.
The multi-model flexibility deserves direct attention. A system locked to a single LLM provider incurs a fixed cost per analysis run. A system that routes to DeepSeek or Gemini Flash for structured financial data analysis costs a fraction of that for comparable output quality on this class of task. The AIHubMix aggregation layer in this repo reflects a pattern increasingly common in production self-hosted LLM tooling: abstract the model selection, let cost and capability determine the route per call.
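The routing idea reduces to a one-line selection rule: from a table of per-model cost and measured accuracy, pick the cheapest model that clears your bar. The numbers below are invented for illustration, not benchmarks:

```python
# Toy routing table: (name, USD per 1M output tokens, measured task accuracy).
# Figures are made up; plug in your own benchmark results.
MODELS = [
    ("gemini-flash", 0.60, 0.82),
    ("deepseek-chat", 1.10, 0.85),
    ("claude-sonnet", 15.00, 0.90),
]

def cheapest_clearing(models, accuracy_bar):
    """Return the cheapest model whose measured accuracy meets the bar."""
    eligible = [m for m in models if m[2] >= accuracy_bar]
    if not eligible:
        raise ValueError("no model clears the bar; lower it or add models")
    return min(eligible, key=lambda m: m[1])[0]

print(cheapest_clearing(MODELS, 0.80))  # cheapest eligible model wins
print(cheapest_clearing(MODELS, 0.88))  # raising the bar changes the route
```

Run this selection per task class rather than globally: structured extraction and free-form synthesis rarely share the same cheapest-adequate model.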
The structural category comparison is what happened to RSS reader subscriptions between 2011 and 2014. A tier of paid products in the information aggregation and filtering category got undercut by self-hosted open-source alternatives covering 80% of the use case for zero ongoing cost. daily_stock_analysis is making the same structural bet against paid stock screeners. The bet is more credible in 2026 than it would have been three years earlier, because the LLM cost side of the equation has moved far enough.
The limitation worth naming: LLM signal quality on publicly available financial data has not been validated at scale across full market cycles. The backtesting module validates directional accuracy on historical LLM signals, but that is a different claim from forward-looking alpha generation. The maintainer's disclaimer is explicit. Configure this as a structured first-pass filter and evaluate outputs over time before weighting them in any actual position sizing.
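What the backtesting claim amounts to, in miniature: compare each day's signal direction against the next day's realized return and report the hit rate. The signals and returns below are toy data, and this metric says nothing about position sizing or risk-adjusted returns:

```python
# Directional accuracy sketch: a hit is a signal whose sign matches the
# sign of the next day's return. This is the weaker of the two claims
# discussed above; it is not evidence of forward-looking alpha.

def directional_hit_rate(signals, next_day_returns):
    """signals: +1 buy / -1 sell; returns: fractional next-day moves."""
    hits = sum(
        1 for s, r in zip(signals, next_day_returns)
        if s * r > 0  # same sign means direction matched
    )
    return hits / len(signals)

signals = [+1, +1, -1, +1, -1]
returns = [0.012, -0.004, -0.020, 0.007, 0.003]
rate = directional_hit_rate(signals, returns)
print(f"{rate:.0%}")  # 3 of 5 calls matched direction here
```

A hit rate near 50% on data like this is indistinguishable from noise; demand a sustained, statistically meaningful margin before the signals influence real decisions.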
What to Try
Here is the evaluation checklist for an engineering lead or indie developer deciding whether this repo is worth an afternoon.
Prerequisites before touching the code:
- A GitHub account with Actions enabled on your fork
- At least one LLM API key: Gemini API (free tier available), an OpenAI-compatible endpoint, or an Anthropic Claude key
Local run path:
git clone https://github.com/ZhuLinsen/daily_stock_analysis.git
cd daily_stock_analysis
pip install -r requirements.txt
cp .env.example .env
# Edit .env: set your LLM key, notification channel config, and stock list
python main.py --dry-run # validate configuration without full analysis
python main.py --stocks AAPL,TSLA # first live run with US tickers
python main.py --debug # first stop when output looks wrong
python main.py --market-review # broad index and sector overview, separate from individual stock analysis
GitHub Actions path (zero hosting cost):
- Fork the repository via the GitHub UI
- Navigate to Settings, then Secrets and variables, then Actions
- Add at minimum: one LLM key (GEMINI_API_KEY is the fastest free option to start), one notification channel (TELEGRAM_BOT_TOKEN plus TELEGRAM_CHAT_ID is the simplest first integration), and STOCK_LIST as comma-separated tickers
- Enable Actions on your fork
- Run the "daily stock analysis" workflow manually to validate the full pipeline end-to-end before relying on the scheduled run
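A pre-flight check is worth scripting before trusting the scheduled run. The secret names below are the ones listed above; the helper itself is hypothetical and takes any mapping, so it works against `os.environ` or a test dict:

```python
# Hypothetical pre-flight check: verify the minimum secrets are set.
# Names match the article's GitHub Actions setup; adjust for the LLM
# provider and notification channel you actually configured.
REQUIRED_GROUPS = {
    "llm": ["GEMINI_API_KEY"],
    "notify": ["TELEGRAM_BOT_TOKEN", "TELEGRAM_CHAT_ID"],
    "universe": ["STOCK_LIST"],
}

def missing_secrets(env):
    """Return the names of required secrets that are unset or empty."""
    return [
        name
        for names in REQUIRED_GROUPS.values()
        for name in names
        if not env.get(name)
    ]

env = {"GEMINI_API_KEY": "x", "STOCK_LIST": "AAPL,TSLA"}
print(missing_secrets(env))  # the two Telegram secrets are still unset
```

Failing fast on a missing secret beats debugging an empty dashboard after the 18:00 cron has already fired.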
Breaking-change watch: The README includes an explicit note that the Web UI is in active development with known styling and compatibility issues across pages. If you're running this for a team or relying on the web interface, pin to a specific commit hash rather than tracking main. The GitHub Actions deployment path is the more stable surface right now.
News source configuration (don't skip this): analysis quality depends heavily on which news search APIs you configure. For US equities, SerpAPI or Tavily are the most accessible starting points. Anspire is optimized for Chinese-language content; if you're only analyzing US stocks, skip it and save the API budget. Configure at least one search provider alongside your LLM key; running without news data drops the sentiment and catalyst analysis layer entirely.
Social sentiment layer: The Stock Sentiment API integration covering Reddit, X, and Polymarket is optional and US-market only. Don't block your initial setup on it. Get the core analysis running and evaluate base output quality before layering in social sentiment.
Pin to a specific commit hash before going live. The maintainer has flagged active Web UI churn explicitly in the README; a silent update to main can break your dashboard before market open with no warning.
Ticker format notes:
Chinese market tickers use a different format from US equities. A-shares use numeric codes (600519), Hong Kong uses the hk prefix format (hk00700), and US tickers are standard (AAPL, TSLA). Configure all three formats in STOCK_LIST if you want multi-market analysis running in a single daily job.
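A small classifier for the three formats described above is handy for sanity-checking a mixed STOCK_LIST before the first run. The format rules come from the article; the function itself is illustrative, not the repo's parser:

```python
# Illustrative ticker-format classifier for a mixed multi-market list.
# Rules: numeric code = A-share, "hk" prefix + digits = Hong Kong,
# plain letters = US. Anything else is flagged for manual review.

def market_of(ticker):
    t = ticker.strip()
    if t.lower().startswith("hk") and t[2:].isdigit():
        return "HK"          # e.g. hk00700
    if t.isdigit():
        return "A-share"     # e.g. 600519
    if t.isalpha():
        return "US"          # e.g. AAPL
    return "unknown"

stock_list = "600519,hk00700,AAPL,TSLA"
print([(t, market_of(t)) for t in stock_list.split(",")])
```

Running this over your list before the first Actions run catches the classic mistake of pasting US-style "0700.HK" formats into a config expecting "hk00700".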
The default schedule runs at 18:00 Beijing time. Convert to your local timezone before expecting evening notifications. For US market participants, that landing time shifts by an hour depending on whether daylight saving is in effect.
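The conversion is worth working through once with the standard library, since China observes no daylight saving but most US zones do. The dates below are arbitrary weekdays chosen for illustration:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 18:00 Asia/Shanghai is a fixed UTC offset (+8, no DST), so the UTC
# trigger time never moves; your local landing time can, if your zone
# shifts between standard and daylight time.
beijing = ZoneInfo("Asia/Shanghai")
run = datetime(2026, 4, 27, 18, 0, tzinfo=beijing)

utc_hour = run.astimezone(ZoneInfo("UTC")).hour
ny_hour = run.astimezone(ZoneInfo("America/New_York")).hour

print(utc_hour, ny_hour)  # 10:00 UTC; early morning in New York
```

In GitHub Actions cron terms, a fixed 18:00 Beijing run means scheduling against 10:00 UTC; the platform's cron field is always interpreted in UTC.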
[[IMG: a developer checking a smartphone displaying a Telegram bot notification with a structured stock decision dashboard, buy/sell ratings and risk alert text visible on screen]]
Looking Ahead
The cost trajectory in this space points in one direction over the next twelve to eighteen months. LLM API pricing across every major provider has been declining, and the gap between what a paid stock screener charges per seat and what an LLM API call costs for equivalent structured analysis is going to keep widening. Repos like daily_stock_analysis are early movers in a category that will likely see specialized forks emerge with vertical depth: one tuned for options flow data, one built around European market data formats, one oriented toward crypto sentiment sources.
The signal to watch is whether the maintainer formalizes the data source abstraction layer. Right now, seven market data providers and seven news APIs are wired together through priority rules in configuration. If that gets cleaned into a documented plugin interface, the repo becomes straightforward to fork for specific verticals. That is when the community contribution rate compounds from individual installs to sustained ecosystem development.
Comparable to watch: the Freqtrade project, which traveled from weekend trading bot to serious self-hosted infrastructure with active enterprise forks over roughly three years. daily_stock_analysis is earlier in that arc, but the structural pattern is recognizable: broad data integration, a model-agnostic analysis layer, and a zero-cost deployment path.
Sources:
- ZhuLinsen/daily_stock_analysis, GitHub, accessed 2026-04-27