Research
Our Research Programs
From micro-level individual assessment to macro-level policy analytics — our research constructs an evidence-based feedback loop that transforms behavioral data into public goods.
Architecture
The Data-to-Decision Loop
Quantifying Decision Friction (Micro-Level Data Acquisition)
We hypothesize that the root cause of education-career mismatch lies in the information noise and asymmetry individuals face at critical decision points. Through our proprietary assessment instruments, we capture raw data on cognitive aptitude, intergenerational resource endowments, risk preferences, and intrinsic motivation profiles.
From Individual Profiles to Population-Level Trends
The core analytical engine. Using advanced statistical modeling — including hierarchical linear models, latent class analysis, and machine learning clustering — we identify systematic patterns across large-scale anonymized longitudinal samples.
Evidence-Based Public Goods (Dual-Track Output)
Research outputs are channeled into two streams: open-access AI-driven assessment tools for individuals, and annual policy white papers for universities, government agencies, and public discourse.
Psychometric Assessment & Education Measurement
The HTCS framework is our foundational research engine, built on the science of psychometrics to transcend single-dimensional academic metrics.
Research Methodology — Assessment Development Pipeline
Construct Definition & Item Development
Literature review and expert panel consultation. Item pool generation using cognitive interviewing and think-aloud protocols.
Psychometric Calibration
IRT modeling (2PL, Graded Response Models), Differential Item Functioning (DIF) analysis, and Confirmatory Factor Analysis (CFA).
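To make the calibration step concrete, here is a minimal sketch of the 2PL item response function that underlies IRT modeling (the parameter values are illustrative, not calibrated items):

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic (2PL) item response function:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b))),
    where a is item discrimination and b is item difficulty."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# For an item with discrimination a=1.2 and difficulty b=0.5,
# an examinee at theta == b answers correctly with probability 0.5.
p = irt_2pl(theta=0.5, a=1.2, b=0.5)
```

Graded Response Models extend the same logistic form to ordered polytomous items by modeling cumulative category probabilities.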
Reliability & Validity Evaluation
Cronbach's α and McDonald's ω, test-retest reliability, convergent and discriminant validity, predictive validity studies.
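Cronbach's α can be sketched in a few lines; this is the standard formula applied to a toy score matrix, not our production pipeline:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.
    scores: (n_respondents, k_items) array of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Three perfectly parallel items yield alpha == 1.
scores = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
alpha = cronbach_alpha(scores)
```

McDonald's ω requires a factor-analytic decomposition (loadings and uniquenesses) rather than a single closed-form ratio, which is why it is reported alongside α rather than as a drop-in replacement.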
NLP-Augmented Qualitative Analysis
LDA topic modeling and sentiment classification applied to open-ended career narrative responses.
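A minimal sketch of the topic-modeling step using scikit-learn (the sample responses are invented for illustration, and the real pipeline and hyperparameters may differ):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical open-ended career-narrative responses.
docs = [
    "I chose engineering because I enjoy building systems",
    "my family expected me to study medicine",
    "I love designing and building software systems",
    "medicine felt like a duty to my parents and family",
]

# Bag-of-words representation, then LDA with two latent topics.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # each row is a per-document topic mixture
```

Each row of `doc_topics` is a probability distribution over topics, which can then be joined back onto the respondent's HTCS profile for mixed-methods analysis.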
Continuous Validation & Model Iteration
A retrospective alumni career-trajectory mechanism and longitudinal tracking studies (3-, 5-, and 10-year follow-ups).
Econometric Policy Analysis
Applying causal inference and spatial econometrics to fill the critical data vacuum in international talent policy.
Research Methodology — Causal Inference & Spatial Analysis Toolkit
Quasi-Experimental Policy Evaluation
Difference-in-Differences (DiD), Regression Discontinuity Design (RDD), and Instrumental Variable (IV) approaches.
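The canonical 2x2 DiD estimator can be sketched directly from group means (the numbers below are toy values, purely illustrative):

```python
import numpy as np

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Canonical 2x2 difference-in-differences:
    (treated post - treated pre) - (control post - control pre).
    Identifies the treatment effect under parallel trends."""
    return (np.mean(y_treat_post) - np.mean(y_treat_pre)) - \
           (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre))

effect = did_estimate(
    y_treat_pre=[9, 10, 11], y_treat_post=[14, 15, 16],
    y_ctrl_pre=[9, 10, 11], y_ctrl_post=[11, 12, 13],
)  # (15 - 10) - (12 - 10) = 3
```

RDD and IV designs follow the same logic of constructing a credible counterfactual, exploiting a cutoff rule or an exogenous instrument instead of a pre/post comparison group.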
Input-Output Modeling & Fiscal Multiplier Analysis
Regional I-O models (BEA RIMS II framework) to estimate fiscal multiplier effects of high-skilled international talent.
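The multiplier logic of an input-output model reduces to the Leontief inverse; here is a sketch with a hypothetical 3-sector technical-coefficients matrix (RIMS II itself supplies region-specific coefficients):

```python
import numpy as np

# Hypothetical technical-coefficients matrix A:
# A[i, j] = dollars of sector i input per dollar of sector j output.
A = np.array([
    [0.10, 0.05, 0.02],
    [0.20, 0.15, 0.10],
    [0.05, 0.10, 0.08],
])

# Leontief inverse: total (direct + indirect) requirements per
# dollar of final demand, L = (I - A)^-1.
L = np.linalg.inv(np.eye(3) - A)

# Type-I output multipliers: column sums of the Leontief inverse.
multipliers = L.sum(axis=0)
```

A multiplier above 1 means each dollar of final demand in that sector generates additional upstream output, which is the channel through which high-skilled talent inflows propagate fiscally.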
GIS-Based Spatial Analysis
Moran's I, LISA, and hot-spot analysis to map talent-concentration patterns; Geographically Weighted Regression (GWR) to model spatially varying relationships.
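Global Moran's I, the simplest of these statistics, can be sketched directly from its formula (toy regions and adjacency weights):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I: spatial autocorrelation of values x under
    spatial weights matrix w. Positive values mean similar values
    cluster in space; near -1/(n-1) means no spatial pattern."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n = len(x)
    num = n * (w * np.outer(z, z)).sum()
    den = w.sum() * (z ** 2).sum()
    return num / den

# Six regions on a line with rook adjacency; low and high values cluster.
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1
i_stat = morans_i([1, 1, 1, 5, 5, 5], w)  # positive: clustering detected
```

LISA decomposes this global statistic into per-region contributions, which is what drives the hot-spot maps.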
Machine Learning for Talent-Industry Matching
Random Forest, Gradient Boosting, and k-means clustering; NLP applied to job-posting and curriculum data to quantify skill-gap indices.
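As one illustrative construction of a skill-gap index (our own sketch, not the published methodology): compare each skill term's share of demand in job postings against its share of supply in curricula:

```python
from collections import Counter

def skill_gap_index(posting_tokens, curriculum_tokens, skills):
    """Hypothetical skill-gap index: a skill's share of demand
    (job postings) minus its share of supply (curricula).
    Positive => the labor market asks for it more than schools teach it."""
    demand = Counter(posting_tokens)
    supply = Counter(curriculum_tokens)
    d_total = sum(demand[s] for s in skills) or 1
    s_total = sum(supply[s] for s in skills) or 1
    return {s: demand[s] / d_total - supply[s] / s_total for s in skills}

# Toy tokenized corpora: postings emphasize "ml", curricula emphasize "excel".
gap = skill_gap_index(
    "python ml ml sql".split(),
    "excel excel python sql".split(),
    ["ml", "python", "sql", "excel"],
)
```

In practice the tokens would come from an NLP extraction pipeline over postings and syllabi rather than raw word splits.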
Longitudinal Panel Data Analysis
Fixed-effects and random-effects panel models, survival analysis, and Structural Equation Modeling (SEM) with longitudinal mediation.
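The fixed-effects (within) estimator can be sketched for the one-regressor case: demean within each entity, then run pooled OLS on the demeaned data (toy panel with a common slope of 2 but different entity intercepts):

```python
import numpy as np

def within_fe(y, x, ids):
    """One-regressor fixed-effects (within) estimator: demean y and x
    within each entity, then regress demeaned y on demeaned x.
    Entity-specific intercepts are differenced out."""
    y, x, ids = map(np.asarray, (y, x, ids))
    yd, xd = y.astype(float).copy(), x.astype(float).copy()
    for g in np.unique(ids):
        m = ids == g
        yd[m] -= yd[m].mean()
        xd[m] -= xd[m].mean()
    return (xd * yd).sum() / (xd ** 2).sum()

# Two entities, intercepts 10 and -5, shared slope 2: y = intercept + 2x.
beta = within_fe(y=[12, 14, -3, -1], x=[1, 2, 1, 2], ids=[0, 0, 1, 1])
```

Random-effects models instead treat the entity intercepts as draws from a distribution, trading robustness for efficiency when the entity effects are uncorrelated with the regressors.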
Digital Public Goods & Equity-Centered Technology
Translating our psychometric and econometric research into open-access technology — fulfilling our institutional commitment to educational equity.
Heuristic AI Dialogue Engine
Adaptive assessment logic flows integrating HTCS dimension scores with live labor market data for personalized path simulations, with bias auditing and fairness constraints.
Decision-as-Data Research Feedback System
With full user consent and anonymization, the assessment platform functions as a behavioral data collection instrument, creating a self-reinforcing research-service loop.
Open-Source Knowledge Commons
Release of de-identified datasets, methodological documentation, assessment instrument specifications, analytical code, and standardized decision-support worksheets.
Research Ethics
Data Ethics Framework
Informed Consent
All data collection instruments include clear, multilingual consent disclosures. Participation is voluntary.
De-identification Standards
All research samples undergo strict anonymization compliant with GDPR and CCPA standards.
Differential Privacy
Aggregated research outputs employ differential privacy techniques to prevent reverse-engineering of individual data.
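A minimal sketch of the Laplace mechanism, the textbook differential-privacy technique for counting queries (parameters are illustrative; the deployed mechanism may differ):

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Laplace mechanism for a counting query (sensitivity 1):
    adding noise drawn from Laplace(scale = 1/epsilon) makes the
    released count epsilon-differentially private."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)
# Release a noisy version of a true count of 120 at epsilon = 0.5.
noisy = laplace_count(120, epsilon=0.5, rng=rng)
```

Smaller ε means more noise and stronger privacy; the released value stays close to the truth in expectation while any single individual's presence is masked.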
IRB-Equivalent Review
All human subjects research protocols are reviewed by an independent ethics advisory panel prior to data collection.
Quality Assurance
Validity Monitoring
- Ongoing construct validity audits using multitrait-multimethod (MTMM) matrices.
- Annual model recalibration based on incoming longitudinal outcome data.
- External peer review of all white paper publications prior to public release.
- Transparent reporting of model limitations, confidence intervals, and known biases in all published outputs.