AI Opportunity Evaluation: YC-Style Scorecard & Risk Analysis¶
Date: March 9, 2026
Team Profile: 5 engineers with NVIDIA Jetson + computer vision expertise
Evaluation Framework: YC scorecard, riskiest assumption testing, pre-mortems
Table of Contents¶
- AI RFP/Proposal Response Engine
- AI CRE Deal Screening
- AI Healthcare Voice Agent
- AI Legal Document Review
- AI Sales Call Intelligence for SMB
- Privacy-First Edge Camera Analytics
- AI Visual Inspection for Manufacturing
- Perimeter Intrusion Detection
- SMB Predictive Maintenance
- Construction Site Safety
- Final Power Ranking
- Combination Ideas
1. AI RFP/Proposal Response Engine¶
Concept: AI auto-fills new RFP responses from past proposals. $299-$499/mo. Incumbents (Loopio at ~$24K/yr) are legacy. 4-6 week MVP.
YC Scorecard¶
| Criterion | Score | Rationale |
|---|---|---|
| Problem severity | 3 | Painful but not existential. Teams survive doing this manually. |
| Problem frequency | 3 | Weekly to monthly for most teams; daily only for dedicated proposal shops. |
| Market size | 4 | Proposal management software market ~$3.2B in 2025, growing to $9B by 2035 at 11% CAGR. |
| Existing solutions quality | 3 | Loopio, Responsive, QorusDocs exist and work reasonably well. AI-native competitors (AutoRFP.ai, Inventive.ai) already emerging. Not a greenfield. |
| Willingness to pay | 3 | Companies pay Loopio $24K/yr, but SMBs are price-sensitive. Your $299-$499/mo is ~$3.6-6K/yr -- attractive but unproven at that tier. |
| Buildability in 6 days | 4 | Core is RAG over past proposals + template filling. LLM APIs make this feasible quickly. |
| Founder fit | 1 | Zero connection to Jetson/CV expertise. Pure software LLM play. Team has no unfair advantage. |
| Timing | 3 | LLM capabilities enable this now, but the window is closing. AutoRFP.ai, Inventive.ai already funded. |
| Growth mechanics | 2 | No natural virality. Enterprise sales cycles. Could get word-of-mouth in proposal manager communities. |
| Defensibility | 2 | Low moat. Any team can build RAG over documents. Incumbents are adding AI features (Loopio already has AI Assist). |
| TOTAL | 28/50 | |
Riskiest Assumption Test¶
THE assumption: SMBs who currently respond to RFPs manually will pay $299-$499/mo for an AI tool rather than continuing with Word/Google Docs or upgrading to an incumbent adding AI features.
Test in 1 week with $0: Post in 3-5 LinkedIn groups and Reddit communities (r/sales, r/procurement, proposal management forums) with a Loom video of a clickable prototype. Offer "founding member" pricing at $199/mo. Collect email signups with credit card commitment (Stripe checkout page, don't charge).
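For the "don't charge" mechanics, Stripe Checkout's setup mode collects and saves a card without creating a payment -- a reasonable way to capture a credit-card-level commitment signal. A minimal sketch assuming the official stripe Python package; the URLs and metadata tag are placeholders:

```python
# Sketch: collect a card commitment without charging it.
# Stripe Checkout in "setup" mode saves a payment method but creates no charge.
# URLs and the metadata tag are placeholders.
import os
import stripe

stripe.api_key = os.environ["STRIPE_SECRET_KEY"]

session = stripe.checkout.Session.create(
    mode="setup",  # save a card, don't charge it
    payment_method_types=["card"],
    success_url="https://example.com/founding-member?status=committed",
    cancel_url="https://example.com/founding-member?status=cancelled",
    metadata={"offer": "founding-member-199"},  # tag signups for follow-up calls
)

print(session.url)  # link prospects here from the landing page / Loom CTA
```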
Evidence bar:
- PROCEED: 30+ signups with credit card info from 500 views (6% conversion). At least 5 willing to do a 30-min call.
- KILL: Fewer than 10 signups, or signups come only from curious lookers who ghost on follow-up calls.
Pre-Mortem (18 months out, it failed)¶
Most likely failure: Loopio, Responsive, and QorusDocs all ship AI features that are "good enough" within their existing platforms. Buyers stick with incumbents because switching costs are high (content libraries are already built there). Your standalone AI tool can't overcome the distribution advantage of platforms with 1,700+ existing customers.
Second most likely: You acquire early SMB customers but churn is brutal. RFP response is episodic -- customers sign up for a big RFP push, use the tool for 2 months, then cancel. Monthly churn exceeds 8%, making unit economics unworkable.
Early warning signs: Customer activation rate below 40% in first week. Prospects say "this is cool but I need to see it work with my existing workflow." Incumbent press releases about AI features.
Decision: KILL (confidence: medium)¶
The team has zero founder-market fit, the market is being eaten from above by well-funded incumbents adding AI, and from below by AI-native startups (AutoRFP.ai) that are already further along. The $299/mo price point targets an awkward middle -- too expensive for freelancers, not enterprise enough for procurement teams.
2. AI CRE Deal Screening¶
Concept: Ingests rent rolls, OMs, T12s for CRE deals. Deal summary in 60 seconds. $500-$2K/mo. 50K+ underserved syndicators. 6-8 week MVP.
YC Scorecard¶
| Criterion | Score | Rationale |
|---|---|---|
| Problem severity | 4 | Syndicators manually underwrite deals in spreadsheets for hours. Missing a deal or mispricing one costs real money. Hair-on-fire for active deal flow. |
| Problem frequency | 4 | Active syndicators screen 5-20 deals per week. Daily workflow during active acquisition periods. |
| Market size | 3 | 50K+ syndicators is a real number but the TAM at $500-2K/mo is $300M-$1.2B. CRE tech market projected to generate $110-180B in value (McKinsey), but this slice is niche. |
| Existing solutions quality | 4 | Most syndicators use Excel. Existing CRE AI tools (Reonomy, Cherre, PropRise) focus on data/sourcing, not deal screening from uploaded documents. Big gap in document-to-underwriting workflow. |
| Willingness to pay | 4 | CRE operators spend $2K-10K/yr on various tools already. $500-2K/mo is justified if it saves 5-10 hours/week of analyst time at $50+/hr. One better deal per year pays for the tool 10x over. |
| Buildability in 6 days | 3 | Document parsing (OCR + extraction from messy PDFs) is harder than it looks. Rent rolls and T12s come in wildly inconsistent formats. Needs structured data extraction pipeline. Doable in 6-8 weeks, tight for 6 days. |
| Founder fit | 1 | No CRE domain expertise. No CV relevance. Pure document AI play. |
| Timing | 4 | CRE deals are picking up again after 2023-2024 downturn. 92% of CRE teams piloting AI but only 5% achieving goals (JLL 2025). Market is ready, tools aren't. |
| Growth mechanics | 3 | Syndicator communities are tight-knit. Deal summaries can be shared with investors (built-in distribution). Podcast/YouTube CRE community is large and engaged. |
| Defensibility | 3 | Data flywheel: more deals processed = better extraction models. Domain-specific training data is hard to replicate. But ultimately LLM-powered, so vulnerable to well-funded entrants. |
| TOTAL | 33/50 | |
Riskiest Assumption Test¶
THE assumption: Your AI can reliably extract structured financial data from the messy, inconsistent formats of rent rolls, OMs, and T12 statements with enough accuracy that syndicators trust the output without manually re-checking everything.
Test in 1 week with $0: Ask 5 CRE syndicators (find them on BiggerPockets forums, CRE Twitter/X) to send you 3 real (redacted) deal packages each. Run them through GPT-4 with a carefully crafted extraction prompt. Measure accuracy against their manual spreadsheets. Show them the output. Ask: "Would you trust this enough to make a go/no-go screening decision?"
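The extraction-and-scoring harness for this test can be small. A sketch assuming the OpenAI Python SDK; the model name, field list, and ±2% tolerance are assumptions, not part of the original plan:

```python
# Sketch: extract key CRE metrics from a deal document with an LLM and score
# them against the syndicator's own spreadsheet numbers. Assumes the OpenAI
# SDK; the model name, field list, and tolerance are assumptions.
import json
from openai import OpenAI

client = OpenAI()
FIELDS = ["noi", "cap_rate", "dscr"]

def extract_metrics(doc_text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract NOI (annual, USD), cap rate (decimal), and DSCR "
                        "from this CRE document. Return JSON with keys noi, "
                        "cap_rate, dscr; use null for anything not stated."},
            {"role": "user", "content": doc_text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

def field_accuracy(extracted: dict, ground_truth: dict, tol: float = 0.02) -> float:
    """Fraction of fields within +/-2% of the syndicator's spreadsheet value."""
    hits = sum(
        1 for f in FIELDS
        if extracted.get(f) is not None
        and abs(extracted[f] - ground_truth[f]) <= tol * abs(ground_truth[f])
    )
    return hits / len(FIELDS)
```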
Evidence bar:
- PROCEED: 85%+ extraction accuracy on key financial metrics (NOI, cap rate, DSCR) across at least 10 of 15 documents. At least 3 of 5 syndicators say "yes, I'd use this for initial screening."
- KILL: Below 70% accuracy, or syndicators say "close but I'd still need to re-do the spreadsheet manually anyway."
Pre-Mortem (18 months out, it failed)¶
Most likely failure: Document extraction accuracy plateaus at 80-85% for edge cases (handwritten notes on rent rolls, scanned PDFs, non-standard T12 formats). Customers tolerate errors for the first month, then churn because "I still have to check everything." The product becomes a novelty rather than a trusted tool.
Second most likely: CRE market cycles into another downturn. Deal flow dries up, syndicators aren't screening deals, and your product becomes shelf-ware. Customer acquisition stalls because the target market has shrunk.
Early warning signs: Support tickets dominated by "the numbers are wrong." Usage drops after initial novelty period. Customers downgrade from $2K to $500 tier. Average session time is under 2 minutes (they glance and leave).
Decision: GO (confidence: medium)¶
Strong problem-severity and willingness-to-pay. The timing is right with CRE recovery. Main risks are technical (document extraction accuracy) and team fit (no CRE domain expertise). Would be a strong GO if the team had one person with CRE background. Consider finding a CRE-experienced co-founder or advisor.
3. AI Healthcare Voice Agent¶
Concept: AI phone calls for prior auth/insurance verification for small practices. $500-$1,500/mo. 8 week MVP + HIPAA compliance.
YC Scorecard¶
| Criterion | Score | Rationale |
|---|---|---|
| Problem severity | 5 | Hair-on-fire. Prior auth is the #1 administrative burden in healthcare. Staff spend 14+ hours/week on hold with payers. $450B annual admin crisis (Prosper AI's framing). Practices hire dedicated staff just for this. |
| Problem frequency | 5 | Multiple calls per day, every day. Every patient visit can trigger a prior auth call. |
| Market size | 5 | 250K+ physician practices in the US. Healthcare admin is a $450B problem. Voice AI in healthcare growing at 37.8% CAGR. Even the small practice segment is massive. |
| Existing solutions quality | 3 | SuperDial ($20M+ raised, $15M Series A), Prosper AI (YC-backed, $5M seed, 4x revenue growth), Infinitus, VoiceCare AI (Mayo Clinic pilot) are all attacking this. Solutions exist but market is far from saturated, especially for small practices. |
| Willingness to pay | 5 | A single full-time prior auth employee costs $35-50K/yr. $500-1,500/mo ($6-18K/yr) is a no-brainer replacement. Clear ROI calculation that sells itself. |
| Buildability in 6 days | 1 | HIPAA compliance is a multi-month effort (BAAs, encryption, audit logging, breach notification procedures). Voice AI requires telephony integration, IVR navigation, real-time speech processing. 8 weeks is aggressive; realistically 12-16 weeks. Not 6 days. |
| Founder fit | 1 | Zero healthcare domain expertise. No voice AI experience. No HIPAA background. Jetson/CV skills are irrelevant here. |
| Timing | 5 | Perfect timing. CMS prior auth reforms taking effect 2026. Voice AI accuracy has reached enterprise-ready levels. Prosper AI's 4x revenue growth proves demand. Half of US hospitals plan voice AI by 2026. |
| Growth mechanics | 3 | Practice managers talk to each other. Medical conferences, physician networks. But healthcare sales cycles are slow and trust-based. Not viral. |
| Defensibility | 2 | Voice AI tech is commoditizing (Retell AI, Bland AI, Vapi provide platforms). Payer-specific IVR navigation knowledge is a moat but takes time to build. SuperDial and Prosper have 12-18 month head start. |
| TOTAL | 35/50 | |
Riskiest Assumption Test¶
THE assumption: A team with zero healthcare experience can build a HIPAA-compliant voice agent that reliably navigates payer IVR systems and conducts live conversations with insurance reps accurately enough that small practices trust it with their revenue cycle.
Test in 1 week with $0: Call 10 small medical practices (find them on Google Maps). Ask office managers: "How many hours per week does your staff spend on hold with insurance companies? What would you pay to eliminate that? Would you trust an AI to make those calls?" Record pain level and price sensitivity. Separately, manually call 5 different payer phone lines and document the IVR trees, hold times, and conversation patterns.
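When documenting the payer lines, it helps to capture each IVR tree as data rather than prose notes, because the evidence bar below hinges on whether the trees are navigable with structured logic. A toy sketch; the menu options are illustrative, not a real payer's:

```python
# Toy sketch: record a documented payer IVR tree as data and walk a scripted
# path through it. The menu options are illustrative, not a real payer's.
IVR_TREE = {
    "root": {"prompt": "main menu", "options": {"1": "provider", "2": "member"}},
    "provider": {"prompt": "provider services", "options": {"3": "prior_auth"}},
    "prior_auth": {"prompt": "prior auth status", "options": {}},  # human queue
}

def navigate(tree: dict, keys: list) -> str:
    """Return the node reached by pressing `keys` from the root.
    A KeyError means the path isn't scriptable with fixed logic."""
    node = "root"
    for key in keys:
        node = tree[node]["options"][key]
    return node

assert navigate(IVR_TREE, ["1", "3"]) == "prior_auth"
```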
Evidence bar:
- PROCEED: 8/10 office managers confirm 10+ hours/week on payer calls. At least 6/10 say they'd pay $500+/mo. Payer IVR trees are navigable with structured logic (not unpredictable human gatekeepers).
- KILL: Office managers say "we already have a system" or "I'd never trust AI with insurance calls." Payer phone systems require complex human judgment that can't be scripted.
Pre-Mortem (18 months out, it failed)¶
Most likely failure: HIPAA compliance and healthcare sales cycles consume all bandwidth. The team spends 6 months on compliance infrastructure and another 6 months trying to land the first 10 customers through slow, trust-based healthcare sales. Meanwhile, SuperDial and Prosper AI (who already have HIPAA compliance, payer integrations, and customer trust) expand into the small practice segment and eat the market. You run out of runway before reaching meaningful revenue.
Second most likely: The voice agent works for 60% of payer calls but fails on the other 40% (unusual hold procedures, payer reps who don't follow scripts, edge cases in benefits verification). Practices can't rely on it for all calls, so they keep their staff AND pay for your tool -- creating a "nice to have" rather than a "must have." Churn follows.
Early warning signs: HIPAA compliance takes longer than 8 weeks. First 5 customers take more than 3 months to close. Call success rate below 70%. Practices use it for simple calls but still handle complex ones manually.
Decision: KILL (confidence: medium-high)¶
Despite being the largest and most painful market on this list, this is the wrong opportunity for this team. The founder-market fit score of 1/5 is disqualifying. HIPAA compliance alone will consume 30-50% of the team's capacity for months. SuperDial ($20M+ raised) and Prosper AI (YC-backed, 4x revenue growth) are 12-18 months ahead with healthcare-native teams. The technical moat here is healthcare domain knowledge and payer system expertise, not engineering -- and this team has neither.
4. AI Legal Document Review¶
Concept: Contract review for in-house teams at mid-market companies. $500-$1,500/mo. Harvey ($8B-$11B valuation) only serves BigLaw. 6-8 week MVP.
YC Scorecard¶
| Criterion | Score | Rationale |
|---|---|---|
| Problem severity | 3 | Mid-market in-house teams review contracts regularly but it's manageable. Not hair-on-fire -- more like a persistent annoyance. Companies aren't losing money daily from slow contract review. |
| Problem frequency | 4 | In-house counsel reviews contracts daily. NDAs, vendor agreements, customer contracts -- steady stream. |
| Market size | 4 | ~40K mid-market companies in the US with in-house legal teams. Legal tech market is large. Harvey alone at $195M ARR proves willingness to spend on AI legal tools. |
| Existing solutions quality | 3 | Harvey ($11B valuation, $195M ARR, 100K+ lawyers) dominates BigLaw but is enterprise-priced. Legora ($1.8B valuation), Ironclad, Juro, ContractPodAi serve CLM. Gap exists for mid-market, but it's narrowing. |
| Willingness to pay | 3 | Mid-market in-house teams have tighter budgets than BigLaw. $500-1,500/mo is reasonable but you're competing against "just have a junior associate do it." Harvey's success at $1,440/user/year for BigLaw doesn't translate to mid-market willingness. |
| Buildability in 6 days | 3 | Contract review requires fine-tuned understanding of legal clauses, risk flagging, redlining. LLM-based MVP is buildable but accuracy requirements are high -- legal errors have consequences. |
| Founder fit | 1 | No legal domain expertise. No NLP/document AI background beyond CV. Zero unfair advantage. |
| Timing | 4 | Harvey validating the market at $11B is a strong signal. Mid-market is underserved. But Harvey is expanding downmarket -- this window may close. |
| Growth mechanics | 2 | Legal is conservative. Sales cycle is relationship-driven. No natural virality. GC-to-GC referrals are possible but slow. |
| Defensibility | 2 | LLM-based contract review is commoditizing. Harvey, with $500M+ in funding, will eventually move downmarket. Your moat is "cheaper" which is not a moat. |
| TOTAL | 29/50 | |
Riskiest Assumption Test¶
THE assumption: Mid-market in-house legal teams will adopt a contract review tool from an unknown startup when (a) legal errors carry real liability, (b) Harvey is expanding downmarket, and (c) the team has zero legal credibility.
Test in 1 week with $0: Message 20 in-house General Counsels at mid-market companies (find them on LinkedIn, filter by company size 200-2,000 employees). Ask: "How do you handle contract review today? What tools do you use? Would you try an AI contract reviewer at $500/mo? What would make you trust it?" Also, run 10 real NDAs and vendor agreements through Claude/GPT-4 with contract review prompts and evaluate output quality.
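The output-quality half of that test is scriptable: ask the model to flag risk clauses, then score it against hand-labeled contracts. A sketch assuming the Anthropic Python SDK; the model name and clause taxonomy are placeholders:

```python
# Sketch: ask the model to flag risk clauses, then measure recall against a
# hand-labeled answer key. Assumes the Anthropic SDK; the model name and
# clause taxonomy are placeholders.
import anthropic

client = anthropic.Anthropic()
RISK_CLAUSES = {"indemnification", "limitation_of_liability",
                "auto_renewal", "non_compete", "assignment"}

def flag_clauses(contract_text: str) -> set:
    msg = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": "List which of these risk clauses appear in the contract, "
                       "one per line, exactly as named: "
                       + ", ".join(sorted(RISK_CLAUSES)) + "\n\n" + contract_text,
        }],
    )
    found = {line.strip() for line in msg.content[0].text.splitlines()}
    return found & RISK_CLAUSES

def recall(flagged: set, labeled: set) -> float:
    """Share of hand-labeled risk clauses the model caught (PROCEED bar: 90%+)."""
    return len(flagged & labeled) / len(labeled) if labeled else 1.0
```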
Evidence bar:
- PROCEED: 5+ GCs take the call. At least 3 say they'd trial an AI tool. LLM output correctly identifies 90%+ of key risk clauses in test contracts.
- KILL: GCs won't take the call or say "I'd never trust AI for contracts without a legal team behind it." LLM misses critical clauses or hallucinates provisions.
Pre-Mortem (18 months out, it failed)¶
Most likely failure: Harvey announces a mid-market tier at $500/user/month. With their $500M+ war chest, brand credibility from BigLaw adoption, and 100K+ lawyer user base, they obliterate your positioning overnight. Your "Harvey for the mid-market" pitch becomes "worse Harvey for the mid-market."
Second most likely: Legal buyers require SOC 2, security reviews, and insurance/indemnification for AI-generated legal analysis. The sales cycle stretches to 6-9 months as legal teams demand extensive security documentation, pilot periods, and approval from risk committees. You burn runway on compliance rather than product.
Early warning signs: Harvey announces mid-market pricing. Prospects ask "what happens if your AI misses a clause and we get sued?" Sales cycle exceeds 90 days. Pilot customers use it for low-stakes contracts only.
Decision: KILL (confidence: high)¶
Harvey raised $500M+ and is already talking about moving downmarket. At $11B valuation with 3.5x ARR growth, they have essentially infinite resources to capture the mid-market when they choose to. Competing on "cheaper Harvey" against the actual Harvey is a losing strategy. The team has no legal domain expertise, no legal industry relationships, and no credibility with risk-averse legal buyers. There are better opportunities on this list.
5. AI Sales Call Intelligence for SMB¶
Concept: Gong for SMBs at $79/user/mo. 4-6 week MVP.
YC Scorecard¶
| Criterion | Score | Rationale |
|---|---|---|
| Problem severity | 3 | Sales teams want call insights but SMBs often have 2-5 reps -- the pain of not having call intelligence is manageable at that scale. |
| Problem frequency | 5 | Every sales call, every day. High-frequency usage pattern. |
| Market size | 4 | Millions of SMBs with sales teams. Gong at $332M ARR proves the market. SMB segment is larger by count but smaller by spend. |
| Existing solutions quality | 2 | Market is saturated. Avoma ($19/user/mo), Fireflies, Otter.ai, Chorus, Jiminny, Claap, Sybill -- all serve SMBs already. At least 15+ credible alternatives to Gong exist at lower price points. |
| Willingness to pay | 3 | $79/user/mo is ~$950/user/year. SMBs are price-sensitive. Avoma at $19/user/mo and Fireflies at $18/user/mo set the market floor far below $79. |
| Buildability in 6 days | 4 | Transcription APIs (Deepgram, AssemblyAI) + LLM summarization + basic CRM integration. Technically straightforward MVP (see the sketch after this scorecard). |
| Founder fit | 1 | No sales tech expertise. No voice/NLP background. Jetson/CV completely irrelevant. |
| Timing | 2 | Late to this market. The conversation intelligence wave crested 2021-2023. Market is now mature and consolidating (Gong's valuation dropped from $7.25B to $4.5B in secondary transactions). |
| Growth mechanics | 3 | Product-led growth is possible (free tier, meeting bot joins calls). But the space is noisy and customer acquisition costs are high. |
| Defensibility | 1 | Commodity market. Transcription and summarization are fully commoditized via API. No proprietary data, no switching costs, no network effects. Easiest product on this list to replicate. |
| TOTAL | 28/50 | |
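The buildability row above is worth grounding: the whole core pipeline is a few dozen lines over commodity APIs, which is also why defensibility scores 1/5. A sketch assuming the AssemblyAI and OpenAI SDKs; the file path and model names are placeholders:

```python
# Sketch of the core pipeline: transcribe a recorded call, then summarize it
# with an LLM. Assumes the AssemblyAI and OpenAI SDKs; the file path and
# model name are placeholders.
import os
import assemblyai as aai
from openai import OpenAI

aai.settings.api_key = os.environ["ASSEMBLYAI_API_KEY"]
transcript = aai.Transcriber().transcribe("sales_call.mp3")  # local file or URL

summary = OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarize this sales call as short bullets: objections "
                    "raised, next steps, and deal risks."},
        {"role": "user", "content": transcript.text},
    ],
)
print(summary.choices[0].message.content)
```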
Riskiest Assumption Test¶
THE assumption: SMBs will pay $79/user/mo for your call intelligence tool when Avoma costs $19/user/mo and Fireflies costs $18/user/mo with similar core features.
Test in 1 week with $0: Create a landing page with $79/user/mo pricing and feature comparison against Avoma/Fireflies/Otter. Drive traffic from ProductHunt, LinkedIn, and sales communities. Measure signup intent. Also, interview 10 SMB sales managers: "Do you use call recording? If yes, what? If no, why not? Would you pay $79/user/mo?"
Evidence bar:
- PROCEED: 50+ signups from 1,000 page views. At least 5 SMB managers say "I'd switch from [current tool] for a meaningfully better experience." Clear differentiation identified that justifies the 4x price premium.
- KILL: Prospects consistently say "why would I pay $79 when Fireflies is $18?" No clear differentiation emerges. Signups below 20 from 1,000 views.
Pre-Mortem (18 months out, it failed)¶
Most likely failure: You launch into a red ocean with 15+ established competitors and can't differentiate. Customer acquisition cost exceeds $500/customer while LTV is under $1,000 (given SMB churn rates of 5-8%/mo). Unit economics never work. You're spending more to acquire customers than they'll ever pay you.
Second most likely: Gong launches an official SMB tier at $99/user/mo, leveraging their brand, data moat (billions of analyzed conversations), and AI models trained on 10x more data than you'll ever have. Your "Gong for SMBs" pitch dies the day Gong decides to serve SMBs.
Early warning signs: CAC exceeds $300 in first 3 months. Monthly churn above 6%. Prospects compare you to free Otter.ai. Feature requests are all "can you do what Gong does?" No organic/word-of-mouth growth.
Decision: KILL (confidence: high)¶
This is the worst opportunity on the list. The market is oversaturated with 15+ competitors at every price point. The $79/mo pricing is 4x what established alternatives charge. The team has zero sales tech expertise. Gong could launch an SMB tier at any time. There is no differentiation, no moat, and no founder-market fit. Hard kill.
6. Privacy-First Edge Camera Analytics¶
Concept: Jetson box, edge-processed video, no cloud upload. $80/camera/mo + hardware margin. 6-8 week MVP.
YC Scorecard¶
| Criterion | Score | Rationale |
|---|---|---|
| Problem severity | 4 | Privacy regulations (GDPR, CCPA, BIPA) are creating real compliance risk for businesses using cloud video analytics. Healthcare, education, and government have strict data residency requirements. Fines are material. |
| Problem frequency | 5 | Video analytics run 24/7. Every camera, every frame, continuous processing. |
| Market size | 4 | Edge AI market $19.1B in 2024, growing to $64.7B by 2032 at 16.5% CAGR. Video analytics is the largest application segment. |
| Existing solutions quality | 3 | Rhombus, Verkada, and Axis exist but most are cloud-dependent. True edge-only, privacy-first solutions are rare. Hailo-8 chips are emerging but few turnkey solutions exist for end users. |
| Willingness to pay | 3 | $80/camera/mo is reasonable for enterprise but a tough sell for SMBs with 2-4 cameras. Hardware margin helps but increases sales friction. Verkada charges $200-400/camera/year for cloud. |
| Buildability in 6 days | 4 | Team knows Jetson inside and out. Object detection, people counting, zone intrusion -- well-understood CV problems. Can have a working demo in days (see the sketch after this scorecard). |
| Founder fit | 5 | PERFECT fit. Jetson + CV is literally what this team does. This is the unfair advantage opportunity. Hardware + software integration is a barrier that protects against pure-software competitors. |
| Timing | 4 | Privacy regulations tightening globally. GDPR enforcement increasing. US states passing privacy laws (13 states by 2025). Edge compute cost/performance improving rapidly (Jetson AGX Thor, 7.5x performance gain). |
| Growth mechanics | 2 | Hardware sales are inherently non-viral. Channel partner strategy needed (security integrators, MSPs). Slow ramp. |
| Defensibility | 4 | Hardware-software integration is a genuine moat. Edge deployment expertise, optimized models for Jetson, and customer-specific training data create switching costs. Pure software companies can't easily replicate the full-stack experience. |
| TOTAL | 38/50 | |
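To ground the buildability row: zone intrusion reduces to a detector pass per frame plus a point-in-polygon test. A minimal sketch assuming the Ultralytics and OpenCV packages; the model file, camera source, and zone coordinates are placeholders:

```python
# Minimal zone-intrusion sketch: detect people each frame and alert when a
# detection's foot point falls inside a restricted polygon. Assumes the
# Ultralytics and OpenCV packages; the model file, camera source, and zone
# coordinates are placeholders.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small detector; runs on Jetson-class hardware
ZONE = np.array([[100, 400], [500, 400], [500, 700], [100, 700]], np.int32)

cap = cv2.VideoCapture(0)  # camera index or RTSP URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for box in model(frame, classes=[0], verbose=False)[0].boxes:  # class 0 = person
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        foot = ((x1 + x2) / 2, y2)  # bottom-center of the bounding box
        if cv2.pointPolygonTest(ZONE, foot, False) >= 0:
            print("zone intrusion at", foot)  # swap for an on-device alert webhook
cap.release()
```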
Riskiest Assumption Test¶
THE assumption: Enough buyers specifically need edge-only, no-cloud video analytics (rather than simply cloud analytics that's "compliant enough") to sustain a business. In other words: is "privacy-first, no cloud" a buying criterion, or a nice-to-have that doesn't drive purchasing decisions?
Test in 1 week with $0: Contact 15 potential buyers across 3 verticals (healthcare facilities managers, school district IT directors, government building managers). Ask: "Do you currently use video analytics? Are you blocked by privacy/data residency concerns? Would an edge-only solution that never sends video to the cloud unlock a purchase you're currently unable to make?" Also post in relevant forums (r/sysadmin, r/homelab, security integrator forums).
Evidence bar:
- PROCEED: 5+ of 15 say "yes, we've been blocked from deploying cloud video analytics specifically because of privacy/data residency policies." At least 3 describe specific budget allocated for this.
- KILL: Prospects say "privacy is nice but not what's stopping us" or "we just use Verkada and our compliance team signed off."
Pre-Mortem (18 months out, it failed)¶
Most likely failure: "Privacy-first" is a feature, not a product. Verkada and Rhombus add "edge processing mode" or "local storage option" to their existing platforms -- getting 80% of the privacy benefit while keeping their cloud dashboards, integrations, and brand trust. Your wedge disappears. Buyers prefer a known vendor with an edge option over an unknown startup that only does edge.
Second most likely: Hardware logistics kill you. Managing inventory, shipping Jetson boxes, handling hardware failures/RMAs, and supporting diverse camera ecosystems consumes the team. Hardware margins are thin. You become a hardware support company instead of an AI company. Software revenue per unit doesn't justify the operational overhead.
Early warning signs: Prospects ask "can it also do cloud?" more than they ask about privacy. Hardware return rate exceeds 5%. Time spent on hardware support exceeds time spent on AI/software development. Verkada announces an edge mode.
Decision: GO (confidence: medium-high)¶
This is the highest founder-fit opportunity on the list. The team's Jetson/CV expertise is a genuine unfair advantage. The market is large and growing. The key risk is whether "privacy-first" is a strong enough wedge to build a company around, or just a feature that incumbents will bolt on. Mitigate by starting with verticals where edge-only is a hard requirement (government, healthcare, education with strict data policies) rather than a preference. Consider combining with opportunity #8 or #10 for a more specific wedge.
7. AI Visual Inspection for Manufacturing¶
Concept: Edge defect detection for food/pharma. $4K hardware + $1,500/mo per line. 8-10 week MVP.
YC Scorecard¶
| Criterion | Score | Rationale |
|---|---|---|
| Problem severity | 5 | Defective products in food/pharma = regulatory shutdown, recalls, lawsuits. FDA enforcement is real. A single contamination event can cost millions. Compliance is existential. |
| Problem frequency | 5 | Every product on every line, 24/7. Continuous inspection at thousands of items per minute. |
| Market size | 4 | AI defect detection market $3.7B in 2025, growing to $6.6B by 2034. Food and pharma are the highest-value segments due to regulatory pressure. |
| Existing solutions quality | 3 | Cognex, Keyence, OMRON dominate with expensive, rigid systems ($50K-$500K per line). Overview.ai, Landing AI, Jidoka are AI-native but still enterprise-focused. Gap exists for simpler, more affordable systems. |
| Willingness to pay | 5 | $4K hardware + $1,500/mo is a fraction of existing systems ($50K+). If it prevents one recall or FDA warning letter, it pays for itself 100x. Decision-makers have budget authority for quality/compliance. |
| Buildability in 6 days | 2 | Requires line-specific camera rigs, lighting setups, product-specific model training, integration with reject mechanisms (air jets, diverters). Each deployment is partially custom. 8-10 weeks for first deployment is realistic but tight. |
| Founder fit | 5 | Jetson edge deployment + computer vision is the exact skill set. Defect detection is a canonical CV problem. Team can build optimized inference pipelines for real-time line speeds on Jetson hardware. |
| Timing | 4 | FDA FSMA enforcement tightening. Industry 4.0 adoption accelerating. Edge compute now powerful enough for real-time inspection at line speed. Jetson AGX Thor enables models that weren't possible 2 years ago. |
| Growth mechanics | 2 | Enterprise sales with long cycles. Manufacturing is conservative. Each deployment is partially custom. No virality. But successful deployments lead to multi-line expansion within the same customer. |
| Defensibility | 4 | Domain-specific training data for food/pharma defects is proprietary and hard to replicate. Edge deployment expertise + manufacturing integration knowledge creates a service moat. Each customer deployment deepens your model library. |
| TOTAL | 39/50 | |
Riskiest Assumption Test¶
THE assumption: You can achieve production-grade defect detection accuracy (99.5%+ for food/pharma) with a $4K hardware setup and generic training, rather than requiring the $50K+ custom-engineered systems that Cognex/Keyence sell.
Test in 1 week with $0: Contact 5 food/pharma quality managers. Ask: "What defects are you trying to catch? What's your current false positive/negative rate? What does your current inspection system cost? Would you pilot a $4K system that promises comparable accuracy?" Get 10-20 sample defect images per prospect. Run them through a pre-trained detection model on a Jetson to baseline accuracy.
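The baselining step is a short script: run a pre-trained detector over the sample images and compute the detection rate against the quality manager's labels. A sketch assuming the Ultralytics package; the weights file and label format are placeholders, and a real deployment would use a line-specific fine-tuned model:

```python
# Sketch: baseline a pre-trained model's detection rate on sample defect
# images. Assumes the Ultralytics package; the weights file and the
# labels.csv format ("filename,has_defect" with has_defect in {0,1}) are
# placeholders for whatever the prospects actually provide.
import csv
from ultralytics import YOLO

model = YOLO("defect_baseline.pt")  # placeholder pre-trained weights

def detected(image_path: str, conf: float = 0.5) -> bool:
    """True if the model finds at least one defect above the confidence cutoff."""
    return len(model(image_path, conf=conf, verbose=False)[0].boxes) > 0

with open("labels.csv") as f:
    rows = list(csv.DictReader(f))

true_pos = sum(1 for r in rows if r["has_defect"] == "1" and detected(r["filename"]))
actual_pos = sum(1 for r in rows if r["has_defect"] == "1")  # assumes >= 1 defect image
print(f"detection rate: {true_pos / max(actual_pos, 1):.1%} (PROCEED bar: 95%+)")
```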
Evidence bar:
- PROCEED: Achieve 95%+ detection rate on sample images with a generic model. At least 3 of 5 quality managers agree to a paid pilot ($2K-5K for a 2-week trial). Current systems cost 10x+ more.
- KILL: Detection accuracy below 85% on sample images. Quality managers say "we need 99.9% and nothing less." Defect types require specialized imaging (X-ray, hyperspectral) that a standard camera can't capture.
Pre-Mortem (18 months out, it failed)¶
Most likely failure: Each customer deployment requires extensive custom work -- different products, different defect types, different line speeds, different lighting conditions. What looks like a SaaS business becomes a services business. You're doing $150K custom integration projects, not scaling $1,500/mo subscriptions. The team is stretched across 5 customer projects simultaneously, and none are fully automated.
Second most likely: Food/pharma quality managers require extensive validation before production deployment (3-6 month validation protocols in pharma, statistical process control documentation, compliance paperwork). Sales cycles extend to 9-12 months. You burn runway waiting for procurement and quality teams to approve your system.
Early warning signs: First 3 deployments each require more than 2 weeks of on-site customization. Model accuracy varies wildly between products/facilities. Customers ask for "just a few more tweaks" indefinitely. Revenue is project-based rather than recurring.
Decision: GO (confidence: medium)¶
Highest-scoring opportunity on the scorecard. Perfect founder fit and massive willingness-to-pay. The key risk is the services trap -- each deployment becoming a custom project rather than a repeatable product. Mitigate by choosing ONE extremely narrow product type to start (e.g., pill blister pack inspection, or produce sorting) and only expand after the first product is fully standardized. Start with food (faster sales cycles than pharma). The $4K + $1,500/mo pricing is extremely compelling vs. $50K+ incumbents.
8. Perimeter Intrusion Detection¶
Concept: Edge AI for solar farms, substations. $10K-$50K/site + monthly SaaS. 8 week MVP.
YC Scorecard¶
| Criterion | Score | Rationale |
|---|---|---|
| Problem severity | 4 | Copper theft from solar farms and substations is a growing, material problem. A single intrusion can cause $100K+ in damage and weeks of downtime. Insurance requires security measures. |
| Problem frequency | 3 | Intrusion attempts are episodic, not daily. But monitoring must be 24/7. The system runs continuously even if events are infrequent. |
| Market size | 4 | Perimeter intrusion detection market $23.2B in 2025, growing to $88.2B by 2034 at 15.7% CAGR. Solar farm buildout is accelerating (IRA incentives). Thousands of new sites per year. |
| Existing solutions quality | 3 | Cloudastructure deploying solar-powered AI security enclosures across multiple states in Feb 2026. Traditional providers (Axis, Hikvision, Bosch) offer analytics but limited AI edge processing. Market is active but not saturated for AI-native solutions. |
| Willingness to pay | 5 | $10K-$50K per site is small relative to asset value ($5M-$100M+ per solar farm/substation). Insurance requirements and theft prevention justify the spend. Utilities and energy companies have deep pockets. |
| Buildability in 6 days | 3 | Perimeter detection (person/vehicle classification, zone intrusion, night vision) is well-understood CV. Edge deployment on Jetson is team strength. But outdoor ruggedization, power management, cellular connectivity for remote sites add complexity. |
| Founder fit | 5 | Perfect Jetson/CV fit. Edge deployment in remote locations with no cloud connectivity is exactly where Jetson shines. Low-power, ruggedized, autonomous operation. |
| Timing | 5 | Solar farm buildout accelerating under IRA. Critical infrastructure security regulations tightening. Copper theft epidemic. Cloudastructure raising and deploying in Feb 2026 validates the market. NERC CIP compliance requirements. |
| Growth mechanics | 3 | Energy companies own dozens/hundreds of sites. Land-and-expand within a single utility. Solar developers spec security into new projects. Channel through EPC contractors. |
| Defensibility | 3 | Site-specific deployment experience and relationships create switching costs. But the core CV technology (perimeter detection) is well-known. Moat comes from ruggedized hardware design, remote management platform, and customer relationships -- not from AI novelty. |
| TOTAL | 38/50 | |
Riskiest Assumption Test¶
THE assumption: Solar farm and substation operators will buy a perimeter intrusion detection system from a 5-person startup rather than from established security companies (Axis, Bosch) or newer funded players (Cloudastructure) who have track records and insurance/compliance credibility.
Test in 1 week with $0: Call 10 solar farm operators and 5 utility substation managers. Ask: "What security system do you currently use? What's your biggest security challenge? Have you experienced theft or intrusion? What would you pay for reliable AI-powered detection? Would you pilot a system from a new vendor?" Also call 3 solar EPC contractors: "Do you spec security systems into new projects? How do you choose vendors?"
Evidence bar:
- PROCEED: 5+ operators confirm active theft/intrusion problems. At least 3 don't have AI-powered detection and express interest. 1+ EPC contractor willing to spec your system into an upcoming project.
- KILL: Operators say "we already have Axis/Bosch and it works fine" or "security isn't a priority." EPC contractors have locked-in vendor relationships.
Pre-Mortem (18 months out, it failed)¶
Most likely failure: Enterprise sales cycles in energy/utilities are 6-12 months. Procurement requires vendor qualification, insurance documentation, field testing, and pilot programs before purchase orders. You close 3-5 sites in 18 months but burn $500K+ in runway doing it. Revenue doesn't cover the cost of doing business. Cloudastructure, with $20M+ in funding, offers similar solutions with better credentials.
Second most likely: False positive rate in outdoor environments (animals, weather, vegetation movement) is higher than indoor environments. Customers get alarm fatigue. The system that was supposed to replace guards ends up requiring guards to triage AI alerts. Value proposition erodes.
Early warning signs: First pilot takes more than 3 months to close. False positive rate exceeds 5% in field conditions. Prospects require insurance/bonding you can't provide. RFP processes require 3+ years of company history.
Decision: GO (confidence: medium)¶
Strong founder fit and excellent market timing. The deal sizes ($10K-$50K + monthly) create attractive unit economics. The main risk is enterprise sales velocity -- energy companies buy slowly. Mitigate by targeting smaller, faster-moving solar farm developers rather than large utilities. Consider partnering with a solar EPC contractor as a channel. The combination of #6 + #8 (edge camera platform + perimeter detection vertical) is particularly compelling.
9. SMB Predictive Maintenance¶
Concept: Plug-and-play machine monitors for small factories. $450 hardware + $150/machine/mo. 8-10 week MVP.
YC Scorecard¶
| Criterion | Score | Rationale |
|---|---|---|
| Problem severity | 3 | Unplanned downtime is expensive but small factories often accept it as "cost of doing business." Not hair-on-fire unless equipment failures are frequent and catastrophic. |
| Problem frequency | 3 | Machine vibration/temperature monitoring is continuous, but actionable alerts are weekly/monthly. The value is in the rare prediction that prevents a breakdown. |
| Market size | 4 | Predictive maintenance market $14.3B in 2025, growing to $98B by 2033 at 27.9% CAGR. Massive market, but dominated by enterprise. SMB segment is harder to size. |
| Existing solutions quality | 3 | Enterprise solutions from Augury, Senseye, SparkCognition are mature. SMB-focused options are fewer but emerging. Edge-focused newcomers emphasize hardware-agnostic software. |
| Willingness to pay | 3 | $150/machine/mo = $1,800/yr/machine. A small factory with 10 machines pays $18K/yr. ROI depends on downtime cost, which varies wildly. Some SMBs will do the math and say "I'll just keep a spare motor." |
| Buildability in 6 days | 2 | Requires custom sensor hardware (vibration, temperature, current), wireless connectivity, edge processing, anomaly detection models, and a dashboard. Hardware development is 8-10 weeks minimum. Sensor integration is finicky. |
| Founder fit | 3 | Jetson experience helps with edge processing, but predictive maintenance is more about vibration analysis and time-series anomaly detection than computer vision. The team needs to learn new sensor modalities. Partial fit. |
| Timing | 3 | Industry 4.0 tailwinds exist but SMBs are slower to adopt than enterprises. McKinsey says 65% of large manufacturers have deployed IoT sensors -- but SMBs are years behind. Market is growing but SMB adoption is early. |
| Growth mechanics | 2 | No virality. Per-machine pricing limits land-and-expand. Small factories don't talk to each other at conferences. Trade associations and distributors are channels but slow. |
| Defensibility | 2 | Sensor hardware is commoditized (off-the-shelf vibration sensors). Anomaly detection models are well-understood (see the sketch after this scorecard). Augury has years of training data. No strong moat for a new entrant. |
| TOTAL | 28/50 | |
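To illustrate how well-understood the core is (and why the defensibility row scores 2): a first-pass detector over vibration RMS readings can be a rolling z-score, no ML required. A minimal sketch; the window size and threshold are placeholder tuning values:

```python
# Sketch: rolling z-score anomaly detector over vibration RMS readings.
# The window size and threshold are placeholder tuning values.
from collections import deque
from statistics import mean, stdev

def anomalies(readings, window=128, threshold=4.0):
    """Yield (index, value) where a reading deviates more than `threshold`
    standard deviations from the trailing window of normal operation."""
    history = deque(maxlen=window)
    for i, x in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield i, x
                continue  # keep the anomaly out of the baseline window
        history.append(x)
```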
Riskiest Assumption Test¶
THE assumption: Small factory owners will pay $150/machine/month for predictive maintenance rather than continuing with reactive maintenance (fix it when it breaks) or simple preventive maintenance (change the oil every 3 months).
Test in 1 week with $0: Visit 5 small factories (machine shops, food producers, small manufacturers) in your area. Talk to the maintenance manager or owner. Ask: "How often do machines break down unexpectedly? What does a breakdown cost you? How do you decide when to maintain equipment? Would you pay $150/machine/month to predict failures before they happen?" Ask to see their maintenance logs.
Evidence bar:
- PROCEED: 3+ of 5 report monthly unplanned downtime costing $5K+. At least 2 say "yes, I'd pay $150/machine/month if it actually works." Current maintenance approach is purely reactive.
- KILL: Factory owners say "breakdowns happen but it's manageable" or "$150/machine is more than I spend on maintenance today." Preventive maintenance schedules already catch most issues.
Pre-Mortem (18 months out, it failed)¶
Most likely failure: Small factory owners are the hardest customers in B2B software. They don't respond to emails, don't attend webinars, don't browse LinkedIn. Sales requires in-person visits and relationship building. Customer acquisition cost exceeds $2,000/customer. With $150/machine and an average of 5 machines per customer, LTV is maybe $9K over 12 months. Unit economics are upside down.
Second most likely: The sensor hardware works in the lab but fails in the field. Factory environments are harsh -- vibration, dust, heat, electromagnetic interference. The first 10 deployments require constant hardware troubleshooting. The team becomes a hardware support operation instead of an AI company.
Early warning signs: First 5 customers take more than 2 months each to close. Hardware failure rate exceeds 10% in first 6 months. Customers don't check the dashboard regularly. Anomaly detection generates false alarms that erode trust.
Decision: KILL (confidence: medium)¶
The SMB predictive maintenance market sounds large but the per-customer economics are challenging. Small factory owners are difficult to reach and sell to. The team's CV expertise is only partially relevant (this is more vibration/time-series than vision). Hardware development adds months of complexity. The $150/machine/mo price point may not survive contact with real SMB buyers. Better opportunities exist on this list.
10. Construction Site Safety¶
Concept: PPE detection for small contractors. $600 BOM + $499/mo. 6-8 week MVP.
YC Scorecard¶
| Criterion | Score | Rationale |
|---|---|---|
| Problem severity | 4 | OSHA violations cost $1,221-$165,514 per incident. Construction is the most dangerous industry in the US. But small contractors often accept risk rather than invest in technology. Pain is regulatory/insurance-driven rather than self-motivated. |
| Problem frequency | 5 | Every worker, every day, every site. PPE compliance must be monitored continuously during work hours. |
| Market size | 3 | Construction site AI camera market $1.37B in 2024, growing at 17.8% CAGR. But "small contractors" is a subset that's harder to monetize. 750K+ construction companies in the US, but most are very small (under 10 employees). |
| Existing solutions quality | 3 | CompScience, Spot AI, viAct, TrueLook, and Camect all do PPE detection. The market is getting crowded for enterprise construction. Small contractors are underserved but possibly because they don't buy tech. |
| Willingness to pay | 3 | $499/mo is real money for a 10-person contractor making $1-3M/year. OSHA fines are a motivator but most small contractors haven't been fined and consider it unlikely. Insurance discounts could offset cost but that requires insurer partnerships. |
| Buildability in 6 days | 4 | PPE detection (hard hat, vest, harness) is a well-solved CV problem. Pre-trained models exist. Running on Jetson with a camera feed is straightforward for this team (see the sketch after this scorecard). Alert system and dashboard are quick to build. |
| Founder fit | 5 | Core Jetson + CV competency. PPE detection is a canonical computer vision application. Edge deployment for construction sites (outdoor, variable conditions, no reliable internet) is perfect for Jetson expertise. |
| Timing | 4 | OSHA enforcement increasing. AI-powered compliance documentation becoming expected. Insurance companies starting to require safety tech. Construction site monitoring market growing at 16% CAGR. |
| Growth mechanics | 2 | Small contractors don't talk tech at industry events. GC-to-subcontractor mandate could work (GC requires subs to use your system). But sales to individual small contractors is door-to-door. |
| Defensibility | 2 | PPE detection is a commodity CV problem. Any team can build it with pre-trained models. The "edge box" differentiator is real but not unique. CompScience and Spot AI have more resources and data. |
| TOTAL | 35/50 | |
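To ground the buildability row: the core PPE check is one detector pass per frame plus a rule over the detected classes. A sketch assuming an Ultralytics model fine-tuned on person/hard-hat classes; the weights file and class names are placeholders:

```python
# Sketch: per-frame PPE rule -- count detected persons whose box overlaps no
# hard-hat box. Assumes an Ultralytics model fine-tuned on these classes;
# the weights file and class names are placeholders.
from ultralytics import YOLO

model = YOLO("ppe.pt")  # placeholder weights with person / hard_hat classes

def ppe_violations(frame) -> int:
    result = model(frame, verbose=False)[0]
    names = result.names
    people = [b.xyxy[0].tolist() for b in result.boxes if names[int(b.cls)] == "person"]
    hats = [b.xyxy[0].tolist() for b in result.boxes if names[int(b.cls)] == "hard_hat"]

    def has_hat(person):
        px1, py1, px2, py2 = person
        return any(hx1 < px2 and hx2 > px1 and hy1 < py2 and hy2 > py1
                   for hx1, hy1, hx2, hy2 in hats)

    return sum(1 for p in people if not has_hat(p))
```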
Riskiest Assumption Test¶
THE assumption: Small contractors (under 50 employees) will pay $499/mo for AI safety monitoring rather than continuing with manual safety inspections and accepting the risk of occasional OSHA fines.
Test in 1 week with $0: Call 15 small construction contractors. Ask: "How do you handle PPE compliance today? Have you ever been fined by OSHA? Do you have a safety manager? What would you pay for automated PPE monitoring? Would a $499/mo system that documents compliance and could reduce your insurance premiums be worth it?" Also call 3 construction insurance brokers: "Do you offer discounts for AI safety monitoring?"
Evidence bar:
- PROCEED: 5+ contractors say "I'd pay $499/mo if it lowers my insurance" or "I've been fined and need to prevent it." At least 1 insurance broker confirms premium discounts for AI safety monitoring (a 10%+ discount would make the ROI clear).
- KILL: Contractors say "OSHA won't fine me" or "$499 is too much -- I'll just yell at my guys to wear their hard hats." Insurance brokers say no discount program exists.
Pre-Mortem (18 months out, it failed)¶
Most likely failure: Small contractors are the wrong customer. They're price-sensitive, technology-averse, and have short project timelines (move the camera every 2-4 weeks). The hassle of setting up and moving equipment between job sites makes the product impractical. Your actual buyer is the general contractor or owner's rep who mandates safety for an entire site -- but that's enterprise sales, not SMB.
Second most likely: PPE detection accuracy in real construction site conditions (dust, rain, odd angles, workers partially obscured by equipment) is 85% instead of 99%. Workers get frustrated by false alerts. Safety managers disable the system to avoid the noise. The product sits unused.
Early warning signs: Churn exceeds 10%/month as projects end and contractors don't re-subscribe for the next site. Setup/teardown time exceeds 2 hours per site. False positive rate exceeds 5%. Contractors ask for monthly contracts rather than annual.
Decision: PIVOT (confidence: medium)¶
The technology fits perfectly but the customer segment (small contractors) is wrong. Small contractors are hard to sell to, price-sensitive, and move between sites frequently. PIVOT to selling to general contractors or construction management firms who mandate safety across all subcontractors on their sites. This changes the deal size ($2K-$10K/site/mo), the buyer (VP of Safety at a GC), and the sales motion (enterprise) -- but dramatically improves unit economics and reduces churn. Alternatively, combine with #6 (edge camera platform) and #8 (perimeter detection) for a broader construction site intelligence platform.
Final Power Ranking¶
| Rank | Opportunity | Score | Verdict | Justification |
|---|---|---|---|---|
| 1 | #7 -- AI Visual Inspection for Manufacturing | 39/50 | GO | Highest score, perfect founder fit, and the strongest willingness-to-pay on the list. $4K + $1,500/mo pricing is 10x cheaper than Cognex/Keyence, and FDA compliance makes the problem existential for food/pharma buyers. The services-trap risk is real but manageable by choosing one extremely narrow product type and standardizing before expanding. |
| 2 | #6 -- Privacy-First Edge Camera Analytics | 38/50 | GO | The single best founder-fit opportunity. Jetson + CV is the literal job description. The platform play (edge camera analytics) creates optionality to expand into verticals (#8 perimeter, #10 construction) without starting over. Risk is whether "privacy-first" alone is enough of a wedge or if it needs a vertical application to drive initial sales. |
| 3 | #8 -- Perimeter Intrusion Detection | 38/50 | GO | Same score as #6 but ranked lower because the enterprise sales cycle in energy/utilities is longer and more demanding. Excellent timing with solar farm buildout and the copper theft epidemic. Best pursued as a vertical application of the #6 platform rather than a standalone product. Deal sizes ($10K-$50K) are attractive but slow to close. |
| 4 | #10 -- Construction Site Safety | 35/50 | PIVOT | Strong technology fit but wrong customer segment. Small contractors are a dead end -- pivot to GCs and construction management firms. As a vertical application of #6 (edge camera platform), this becomes much more compelling. The OSHA compliance story and insurance discount angle provide real ROI justification. |
| 5 | #3 -- AI Healthcare Voice Agent | 35/50 | KILL | The biggest market and most painful problem on the list, but fatally mismatched with this team. HIPAA compliance alone would consume months. SuperDial ($20M+) and Prosper AI (YC, 4x growth) are 12-18 months ahead with healthcare-native teams. Would be #1 for a team with healthcare + voice AI experience; it is a trap for a Jetson/CV team. |
| 6 | #2 -- AI CRE Deal Screening | 33/50 | GO (conditional) | Interesting niche with strong pain and willingness-to-pay. Timing is right with CRE recovery. Ranked here because there's no founder-market fit (no CRE expertise) and document extraction accuracy is an unproven bet. Would jump to the top 3 with a CRE-experienced co-founder or advisor. Conditional GO: only proceed if the 1-week test shows 85%+ extraction accuracy. |
| 7 | #4 -- AI Legal Document Review | 29/50 | KILL | Harvey at $11B valuation with $500M+ in funding will eventually move downmarket and destroy any startup in this space. Competing on "cheaper Harvey" is not a strategy. Legal buyers are risk-averse and require compliance credentials this team doesn't have. Zero founder-market fit. |
| 8 | #1 -- AI RFP/Proposal Response Engine | 28/50 | KILL | Incumbents (Loopio, Responsive) are adding AI features. AI-native competitors (AutoRFP.ai, Inventive.ai) are already funded and further along. No founder-market fit. The $299-$499/mo price point targets an awkward middle market. Not the worst idea, but there's no reason THIS team should build it. |
| 9 | #9 -- SMB Predictive Maintenance | 28/50 | KILL | Small factory owners are among the hardest B2B customers to reach and sell to. Hardware development adds months. The team's CV expertise is only partially relevant (vibration analysis is a different domain). Per-machine economics at $150/mo don't justify customer acquisition costs. Better enterprise solutions already exist from Augury and Senseye. |
| 10 | #5 -- AI Sales Call Intelligence for SMB | 28/50 | KILL | Worst opportunity on the list despite the tied score. Red ocean with 15+ established competitors at every price point (Avoma at $19/mo, Fireflies at $18/mo). Zero differentiation. Zero founder fit. Zero moat. Gong could launch an SMB tier any day. Transcription + summarization is fully commoditized via API. Nothing about this team's skills gives any advantage here. |
Combination Ideas¶
Recommended Combinations¶
COMBO A: Edge AI Safety & Security Platform (Best Combination)¶
Combine: #6 (Edge Camera) + #8 (Perimeter Detection) + #10 (Construction Safety)
Build a single edge camera analytics platform on Jetson, then deploy vertical-specific AI models as software modules. The hardware is the same Jetson box with ruggedized enclosure. The software layer swaps between:
- Module 1: Perimeter intrusion detection (solar farms, substations, warehouses)
- Module 2: Construction site safety (PPE detection, zone violations, equipment tracking)
- Module 3: General analytics (people counting, vehicle tracking, occupancy)
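A thin module interface keeps that promise honest: the platform loop never changes, and each vertical is a swappable implementation. A minimal sketch; the class names and frame-source API are illustrative:

```python
# Sketch: vertical AI modules as swappable software licenses on one Jetson
# box. The platform loop stays fixed; each vertical implements one interface.
# Class names and the frame-source API are illustrative.
from typing import Iterable, Protocol

class AnalyticsModule(Protocol):
    name: str
    def process(self, frame) -> list:
        """Return zero or more alert events for this frame."""

class PerimeterIntrusion:
    name = "perimeter"
    def process(self, frame) -> list:
        return []  # person/vehicle detection + zone test (opportunity #8)

class ConstructionSafety:
    name = "construction_safety"
    def process(self, frame) -> list:
        return []  # PPE rules (opportunity #10)

def run(frames: Iterable, licensed: list) -> None:
    """Same loop regardless of which modules the customer licensed."""
    for frame in frames:
        for module in licensed:
            for event in module.process(frame):
                print(module.name, event)  # swap for a local alert/webhook sink
```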
Why this works:
- One hardware SKU, one deployment playbook, one support process.
- Each vertical becomes a different software license on the same box.
- Cross-vertical learnings (outdoor detection in all weather) improve all modules.
- The team builds once (Jetson platform + camera integration) and sells three times.
- Pricing flexes from $499/mo (single-module SMB) to $50K/site (enterprise multi-module).
- The platform creates defensibility that a single-vertical point solution lacks.
Estimated MVP: 8-10 weeks for platform + first vertical (start with perimeter detection because the buyer has the biggest budget and clearest ROI).
COMBO B: Edge AI Manufacturing Quality Platform¶
Combine: #7 (Visual Inspection) + #6 (Edge Camera) as the underlying platform
Build visual inspection as the first "killer app" on the edge camera platform. The same Jetson hardware, camera integration, and edge processing pipeline serves both:
- Application 1: Production line defect detection (food/pharma) -- $1,500/mo per line
- Application 2: Facility monitoring (access control, safety compliance, inventory tracking) -- $80/camera/mo
Why this works:
- A food manufacturing facility that buys your inspection system for the production line also needs security cameras, safety monitoring, and access control. One vendor, one platform, one support contract.
- Expands TAM within each customer from a single production line to the entire facility.
- Manufacturing buyers prefer vendors who can solve multiple problems (fewer vendor relationships to manage).
Estimated MVP: 10-12 weeks (inspection app is more complex than surveillance).
COMBO C: AI Document Intelligence (Software Play)¶
Combine: #1 (RFP) + #2 (CRE) + #4 (Legal)
If the team wanted a pure software play, these three share a common core: extracting structured information from unstructured documents and generating compliant output. A single "document AI" engine could power:
- RFP response drafting from past proposals
- CRE deal screening from rent rolls and offering memorandums
- Contract review and risk flagging
Why this is ranked last: While technically elegant, this combines three ideas that all scored low on founder-fit. The team has no expertise in any of these domains. It's three bad fits duct-taped together. Not recommended.
Strategic Recommendation¶
Primary path: COMBO A (Edge AI Safety & Security Platform)
This is the highest-confidence recommendation because:
- Founder fit is maximal. Every component plays to Jetson + CV expertise.
- Platform economics beat point solutions. One hardware build, multiple software revenue streams.
- Market timing is perfect. Solar buildout (IRA), construction monitoring ($5.1B by 2030), edge AI ($64.7B by 2032), and privacy regulation tailwinds all converge.
- Deal sizes are attractive. $10K-$50K/site for perimeter detection provides runway-building revenue early, while $499/mo construction safety adds volume.
- Land and expand. A perimeter detection customer (solar farm) may also need construction safety monitoring for new builds. A construction customer may need perimeter security after project completion.
Sequencing:
- Weeks 1-2: Validate assumptions for perimeter detection (call solar farm operators, map the buying process).
- Weeks 2-10: Build Jetson edge platform + perimeter detection module as first vertical.
- Weeks 10-14: Deploy 2-3 paid pilots at solar farm sites.
- Weeks 14-20: Based on pilot learnings, begin construction safety module development.
- Month 6: Evaluate whether to add manufacturing inspection (COMBO B) as a third vertical.
Secondary path: #7 (Visual Inspection) as a standalone
If the team wants the highest per-unit revenue and is willing to accept longer sales cycles and deployment complexity, manufacturing visual inspection is the single strongest standalone opportunity. Start with ONE specific product type in food manufacturing (e.g., produce quality grading or packaging inspection) and standardize the deployment before expanding.
Sources¶
- Loopio Pricing & Alternatives (Capterra)
- AutoRFP.ai -- Loopio Alternatives
- Proposal Management Software Market (Fortune Business Insights)
- AI in CRE Investment Guide (V7 Labs)
- CRE Underwriting Trends 2026 (Coruzant)
- SuperDial $15M Series A (Fierce Healthcare)
- Prosper AI $5M Seed (Healthcare IT Today)
- Harvey $8B Valuation (TechCrunch)
- Harvey $11B Valuation Talks (TechCrunch)
- Harvey Revenue & Funding (Sacra)
- Gong $300M+ ARR (TechCrunch)
- Gong Pricing Breakdown 2026 (MarketBetter)
- Gong Alternatives (Claap)
- Edge AI Market Forecast (Markets and Data)
- Edge AI Hardware Market (GM Insights)
- AI Defect Detection Market (NaviStrata)
- AI Inspection Systems Guide (Overview.ai)
- Perimeter Intrusion Detection Market (Straits Research)
- Cloudastructure AI Security Deployment (GlobeNewsWire)
- Predictive Maintenance Market (Grand View Research)
- Construction Site Monitoring Market (GlobeNewsWire)
- AI in Construction Safety 2026 (CompScience)
- PPE Detection (Spot AI)
- Voice AI Prior Authorization 2026 (Droidal)