Methodology

This page is the source of truth for what every number in your report means and how it was computed.

The principle: every analytic is a formula. Every formula has inputs you can audit, percentile bands you can verify, and limitations that are written down. None of it is a black-box score, and none of it tells you what to do with your money.

What this is, in regulatory terms

Moonlight is a research-tool publisher, not an investment adviser. We do not hold ourselves out as advising on the value of securities, the advisability of investing in them, or the composition of any individual’s portfolio. The product is an algorithmic report. The same algorithm runs for every user, with no personalization that could be construed as a tailored recommendation.

These boundaries are deliberate. The product is designed to deliver institutional-grade analytics without crossing into territory that would require registration as an investment adviser under the Investment Advisers Act of 1940 or its state-level equivalents. If your situation calls for personalized advice, talk to a fiduciary; this isn’t that.

Pipeline

The standard window for every time-series analytic is 252 trailing trading days, roughly one calendar year. Backtests, factor regression, rolling Sharpe, rolling β, drawdown, and volatility all share this window so the numbers in your report are mutually consistent.

Holdings come in via screenshot parse. Before any analytic runs, the parsed table is run through a deterministic input-validation layer, and the analytic pipeline itself runs through forty-plus deterministic gates that check arithmetic invariants, contract conformance, and freshness at every stage. You confirm the parsed table on screen before the pipeline proceeds; if a ticker was misread, you catch it there, not later when the report is wrong.

The exact set of validation rules is part of the product. We don’t publish the full list. They are the accumulated edge cases of every real portfolio screenshot we’ve handled, and they are what keeps the pipeline from emitting silently wrong analytics on unusual inputs. What we publish is the formulas on this page and the result of the gates: a Quality Check section in every report that names how many gates passed, how many fields were attested, and how many modules ran clean. The full machine-readable provenance trace ships with every report as a provenance.json sidecar, with each numeric field carrying its formula, a hash of its inputs, and a timestamp.
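A per-field sidecar entry looks roughly like the sketch below. The field names and the formula string are illustrative, not the product’s actual schema; the point is only that each number travels with its formula, an input hash, and a timestamp.

```python
import datetime
import hashlib
import json

# Hypothetical shape of one provenance.json entry (illustrative schema).
inputs = {"ticker": "SPY", "window_days": 252}
entry = {
    "field": "portfolio_beta",
    "formula": "OLS slope of daily portfolio returns on SPY over 252 trading days",
    # Hash the canonicalized inputs so any change to them is detectable
    "inputs_hash": hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest(),
    "computed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}
```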

Scorecards

The four cards on the executive summary (Beta, Concentration, Quality, Market Fit) are not a composite score. Each is a single deterministic formula, scored against the S&P 500 cross-section, with the bands published below. There is no rebalancing of the bands based on what makes a portfolio “look better”; the same cutoffs apply to every report.

If you disagree with a band (say you think a Beta of 1.4 should be “balanced” rather than “offensive” because the rest of your portfolio is short equity index), you’re probably right for your own context. The card is a benchmark, not a verdict.

Beta

We compute portfolio Beta as a holdings-weighted CAPM Beta against SPY, estimated by ordinary least squares regression on 252 trailing daily returns. Reported with the 90% confidence interval and the regression R².

Bands.

Worked example. For a portfolio of 60% AAPL and 40% MSFT, where AAPL’s β is 1.20 and MSFT’s β is 0.95, the holdings-weighted β is 0.6 × 1.20 + 0.4 × 0.95 = 1.10. We then re-estimate β directly on the daily portfolio return series and report that as the headline number, with the weighted average shown alongside as a sanity check. The two coincide when the weights are held fixed over a common regression window; they diverge when position weights drifted during the year or the per-name βs were estimated on mismatched samples.
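Both estimates can be sketched in a few lines. The return series here are synthetic stand-ins with a true β of 1.10; the real pipeline regresses actual daily returns against SPY.

```python
import numpy as np

# Holdings-weighted β from the worked example: 60% AAPL (β 1.20), 40% MSFT (β 0.95)
weights = np.array([0.60, 0.40])
name_betas = np.array([1.20, 0.95])
weighted_beta = float(weights @ name_betas)      # 0.6*1.20 + 0.4*0.95 = 1.10

# Headline β: OLS slope of portfolio daily returns on benchmark daily returns,
# simulated here over 252 trading days with a true β of 1.10
rng = np.random.default_rng(0)
mkt = rng.normal(0.0005, 0.010, 252)             # benchmark daily returns
port = 1.10 * mkt + rng.normal(0, 0.004, 252)    # portfolio daily returns
ols_beta, alpha = np.polyfit(mkt, port, 1)       # slope and intercept
r_squared = np.corrcoef(mkt, port)[0, 1] ** 2    # reported alongside β
```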

Limitations. β is unstable when the regression R² is below about 0.6. We display R² so you can see this. β also doesn’t capture jump risk, illiquid positions, or anything off-benchmark.

Concentration

The Herfindahl-Hirschman Index (HHI) of position weights, scaled 0 to 100: HHI = Σ wᵢ² × 100, where wᵢ is each position’s weight as a fraction of the portfolio. We also report the top-K share (largest 1, 3, 5 positions), the effective number of holdings (1 / Σ wᵢ², i.e. 100 / HHI on this scale), and the Gini coefficient of weights.

Bands.

Why HHI rather than just “your top 3 positions are X% of the portfolio”. HHI is the standard antitrust metric. It correctly weights large positions more than proportionally, which matches how single-name risk actually compounds in a real portfolio. A 30% / 30% / 30% / 10% portfolio and a 50% / 30% / 10% / 10% portfolio have the same top-3 share (90%) but very different concentration risk, and HHI catches that.
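The two portfolios in that comparison make the point directly:

```python
def hhi(weights):
    """Sum of squared position weights, scaled 0-100."""
    return sum(w * w for w in weights) * 100

a = hhi([0.30, 0.30, 0.30, 0.10])   # 28: three equal bets plus a tail
b = hhi([0.50, 0.30, 0.10, 0.10])   # 36: one dominant bet
# Both have a 90% top-3 share, but HHI separates them.
effective_holdings = 100 / a        # ~3.6 "effective" positions
```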

Limitations. HHI doesn’t see sector overlap. A portfolio of NVDA / AMD / TSM at 33% each looks “diversified” by HHI but is essentially one bet on semis. Read the sector breakdown alongside the HHI number, not instead of it.

Quality

Per-name quality is a blend of three accounting metrics, equally weighted:

  1. Trailing 5-year median ROIC (return on invested capital). Measures whether the company actually earns its cost of capital.
  2. Net debt / EBITDA. Leverage. Lower is better, floored at 0 for net-cash names.
  3. Free cash flow conversion. FCF as a percentage of net income. Measures whether reported earnings turn into actual cash.

Each name is scored 0 to 100 against the S&P 500 cross-section on each metric, the three are averaged, and the portfolio score is the market-cap-weighted average across your holdings.
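The blend and the weighting can be sketched as follows. The tickers, percentile scores, and market caps are illustrative numbers, not real data.

```python
# Per-name percentile scores (0-100 vs. the S&P 500 cross-section) on the three
# metrics: ROIC, net debt / EBITDA (inverted so higher is better), FCF conversion.
# All numbers below are illustrative.
name_scores = {"AAPL": (85, 70, 90), "MSFT": (90, 80, 85)}
market_caps = {"AAPL": 3.0e12, "MSFT": 2.8e12}

# Equal-weight blend of the three metrics per name
per_name = {t: sum(s) / 3 for t, s in name_scores.items()}

# Market-cap-weighted average across holdings = portfolio quality score
total_cap = sum(market_caps.values())
portfolio_quality = sum(
    per_name[t] * market_caps[t] / total_cap for t in per_name
)
```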

Limitations. Quality is a slow signal. It tells you whether you own businesses that compound, not whether the next quarter prints well. It also breaks for ETFs, for financials (where net debt / EBITDA isn’t meaningful), and for any name with less than 5 years of history (insufficient ROIC data).

Market Fit

We compute the 60-day rolling Pearson correlation of your portfolio’s daily return series against three benchmark return series: SPY (US large-cap), IWM (US small-cap), and EFA (developed international). The dominant correlation tells you which of those three indexes your portfolio actually behaves like, regardless of what the holdings table says.

We do not report a single number; we report all three correlations and let you read the shape. A portfolio that correlates 0.95 to SPY and 0.40 to EFA is not the same as one that correlates 0.70 to both, even if a single “diversification score” would put them at the same point.
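A minimal sketch of the rolling-correlation computation, on synthetic series standing in for the real daily returns:

```python
import numpy as np

def rolling_corr(x, y, window=60):
    """Trailing-window Pearson correlation at each day (NaN until the window fills)."""
    out = np.full(len(x), np.nan)
    for i in range(window - 1, len(x)):
        xs = x[i - window + 1 : i + 1]
        ys = y[i - window + 1 : i + 1]
        out[i] = np.corrcoef(xs, ys)[0, 1]
    return out

# Synthetic example: a portfolio that mostly tracks "SPY" and loosely tracks "IWM"
rng = np.random.default_rng(1)
spy = rng.normal(0.0005, 0.010, 252)
iwm = 0.5 * spy + rng.normal(0, 0.008, 252)
port = 0.9 * spy + rng.normal(0, 0.003, 252)

corr_to_spy = rolling_corr(port, spy)[-1]   # dominant correlation
corr_to_iwm = rolling_corr(port, iwm)[-1]   # weaker correlation
```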

Style Box

The standard 3 × 3 grid: large / mid / small (rows) by value / blend / growth (columns). Positions are weighted by portfolio dollars, not by share count. Each holding is mapped by Russell breakpoints (large is the top 70% of total US market cap, mid is the next 20%, small is the rest) and a value-versus-growth score that blends P/B and forward P/E against sector peers.
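A minimal sketch of the size-row mapping, using the conventional top-70% / next-20% / remainder cumulative-cap breakpoints (the function name is ours, not the product’s):

```python
def size_bucket(cumulative_cap_pct):
    """Map a holding's cumulative US market-cap percentile to a size row.
    Top 70% of total market cap -> large, next 20% -> mid, remainder -> small."""
    if cumulative_cap_pct <= 70:
        return "large"
    if cumulative_cap_pct <= 90:
        return "mid"
    return "small"
```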

Limitations. Style Box is a snapshot, not a trajectory. A “growth” stock with a P/E of 35 today was a “value” stock with a P/E of 12 a year ago and may be again. The chart is useful for diagnosing concentration in a single corner, for instance a portfolio that is 80% large-cap growth, not for forecasting.

Factor regression

We regress your portfolio’s daily excess returns (returns minus the daily risk-free rate) against six factors: the Fama-French five plus momentum. The regression spans 252 trailing trading days. We report each coefficient with its t-statistic, the regression’s R², and the residual α (any return not explained by the factors).

The five style factors are SMB (size), HML (value), RMW (profitability), CMA (investment), and MOM (momentum). They describe systematic exposures that academic research has tied to long-run return premia. Mkt-RF is the broad equity market premium; the rest tell you which slices of the market premium your portfolio is over- or under-weighting.
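The regression itself is plain OLS with an intercept for α. The factor matrix below is synthetic, standing in for the real daily Mkt-RF, SMB, HML, RMW, CMA, and MOM series:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 252, 6
F = rng.normal(0, 0.008, (n, k))                    # daily factor returns (synthetic)
true_loadings = np.array([1.0, 0.2, -0.3, 0.1, 0.0, 0.15])
y = F @ true_loadings + rng.normal(0, 0.004, n)     # portfolio daily excess returns

X = np.column_stack([np.ones(n), F])                # column 0 = daily alpha
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
sigma2 = resid @ resid / (n - X.shape[1])           # residual variance
std_err = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = coef / std_err                            # reported per coefficient
r_squared = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```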

The 1-year backtest is hypothetical. It holds your current weights flat retroactively over the trailing 252 days. This means it is subject to look-ahead bias (the portfolio you hold today already reflects which names ended the year up) and survivorship bias (anything you sold along the way is invisible), and it does not model rebalancing, taxes, or what you would have actually done. Read the backtest as descriptive (“this is what your current weights would have done last year if you’d held them”), not as prescriptive.
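Holding weights flat and compounding is all the backtest does, which a few lines make concrete (per-name returns are synthetic here):

```python
import numpy as np

# Hypothetical backtest: apply today's weights flat to the trailing year of
# per-name daily returns and compound.
rng = np.random.default_rng(3)
daily_returns = rng.normal(0.0005, 0.012, (252, 3))  # 252 days x 3 names (synthetic)
weights = np.array([0.5, 0.3, 0.2])                  # current weights, held flat

port_daily = daily_returns @ weights                 # portfolio daily return series
growth = np.cumprod(1 + port_daily)                  # growth of $1
total_return = growth[-1] - 1
```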

Risk panel (Standard / Pro)

Five distinct risk metrics, computed on the same 252-day window:

Why several risk metrics rather than one. Single risk numbers are misleading. A portfolio with a great Sharpe ratio can have a terrible Calmar (if drawdowns clustered into a single year) and vice versa. We display all five so the shape of the risk shows up.
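The page names Sharpe and Calmar; here is a hedged sketch of those two, plus the max-drawdown input Calmar needs, under the usual √252 annualization convention (the exact conventions the product uses are its own):

```python
import numpy as np

def sharpe(daily, rf_daily=0.0):
    """Annualized Sharpe: mean daily excess return over its std, scaled by sqrt(252)."""
    excess = daily - rf_daily
    return excess.mean() / excess.std(ddof=1) * np.sqrt(252)

def max_drawdown(daily):
    """Largest peak-to-trough loss of the compounded wealth curve, as a fraction."""
    wealth = np.cumprod(1 + daily)
    return 1 - (wealth / np.maximum.accumulate(wealth)).min()

def calmar(daily):
    """Annualized return divided by max drawdown."""
    annualized = np.prod(1 + daily) ** (252 / len(daily)) - 1
    return annualized / max_drawdown(daily)
```

A portfolio can score well on one and badly on the other precisely because Sharpe averages over every day while Calmar is dominated by the single worst stretch.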

Brinson attribution (Pro)

For Pro reports, we decompose the portfolio’s active return (its return relative to the SPY benchmark) into three Brinson components:

  1. Allocation. The return from over- or under-weighting sectors relative to the benchmark, at benchmark returns.
  2. Selection. The return from the stocks chosen within each sector, at benchmark weights.
  3. Interaction. The cross term: the compounding of a sector over-weight with out- or under-performance inside it.

These three sum to the total active return. The decomposition tells you whether you “won” by picking the right sector or by picking the right stock within a sector. Different skills, different signals.
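The standard Brinson arithmetic, with illustrative sector weights and returns, shows the three components reproducing the active return exactly:

```python
import numpy as np

# Illustrative numbers over three sectors
wp = np.array([0.50, 0.30, 0.20])    # portfolio sector weights
wb = np.array([0.40, 0.40, 0.20])    # benchmark sector weights
rp = np.array([0.12, 0.05, -0.02])   # portfolio sector returns
rb = np.array([0.10, 0.06, 0.01])    # benchmark sector returns

allocation = ((wp - wb) * rb).sum()          # right sectors, at benchmark returns
selection = (wb * (rp - rb)).sum()           # right stocks, at benchmark weights
interaction = ((wp - wb) * (rp - rb)).sum()  # cross term
active = (wp * rp).sum() - (wb * rb).sum()   # total active return
# allocation + selection + interaction == active (up to float error)
```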

News desk

For Standard and Pro tiers, we run a 24-hour rolling search for each ticker, deduplicate, rank by relevance, and surface the top hits with source domain and timestamp. We do not summarize the article. We do not editorialize. Source links are clickable so you can read the original.

There is no “sentiment score.” Sentiment classification on financial news is a research problem we’d rather not pretend to solve in twenty lines of prompt; instead, you read the headline.