Our Research Programs

From micro-level individual assessment to macro-level policy analytics — our research constructs an evidence-based feedback loop that transforms behavioral data into public goods.

Architecture

The Data-to-Decision Loop

Phase 1

Quantifying Decision Friction (Micro-Level Data Acquisition)

We hypothesize that the root cause of education-career mismatch lies in the information noise and asymmetry individuals face at critical decision points. Through our proprietary assessment instruments, we capture raw data on cognitive aptitude, intergenerational resource endowments, risk preferences, and intrinsic motivation profiles.

Phase 2

From Individual Profiles to Population-Level Trends

This is the core analytical engine. Using advanced statistical modeling, including hierarchical linear models, latent class analysis, and machine-learning clustering, we identify systematic patterns across large-scale anonymized longitudinal samples.

Phase 3

Evidence-Based Public Goods (Dual-Track Output)

Research outputs are channeled into two streams: open-access, AI-driven assessment tools for individuals, and annual policy white papers for universities, government agencies, and public discourse.

Research Program I

Psychometric Assessment & Education Measurement

The HTCS framework is our foundational research engine, built on the science of psychometrics to transcend single-dimensional academic metrics.

→ For a detailed breakdown of the five HTCS dimensions, see the HTCS Framework page.

Research Methodology — Assessment Development Pipeline

1

Construct Definition & Item Development

Literature review and expert panel consultation. Item pool generation using cognitive interviewing and think-aloud protocols.

2

Psychometric Calibration

IRT modeling (2PL, Graded Response Models), Differential Item Functioning (DIF) analysis, and Confirmatory Factor Analysis (CFA).
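
As a concrete anchor for the IRT stage, the 2PL item response function reduces to a few lines; the sketch below uses plain Python, and the parameter values are illustrative, not calibrated estimates:

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL item response function: probability that a respondent with
    latent ability theta answers correctly, given item discrimination a
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An item with discrimination a=1.0 and difficulty b=0.0: a respondent
# at theta=0 has exactly a 50% chance of success.
print(p_correct_2pl(0.0, 1.0, 0.0))  # 0.5
```

Calibration estimates a and b for every item from response data; DIF analysis then checks whether those curves differ across demographic groups for respondents of equal ability.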

3

Reliability & Validity Evaluation

Cronbach's α and McDonald's ω, test-retest reliability, convergent and discriminant validity, predictive validity studies.
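Cronbach's α itself is a short computation over the item-score matrix; this stdlib-only sketch uses an invented toy response matrix:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one inner list per scale item, one entry per respondent.
    Returns the internal-consistency coefficient alpha."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three 5-point Likert items answered by four respondents (toy data)
items = [[2, 3, 4, 5], [2, 4, 4, 5], [1, 3, 5, 5]]
print(round(cronbach_alpha(items), 3))
```

When items are perfectly correlated α reaches 1.0; McDonald's ω relaxes α's assumption that all items load equally on the construct.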

4

NLP-Augmented Qualitative Analysis

LDA topic modeling and sentiment classification applied to open-ended career narrative responses.

5

Continuous Validation & Model Iteration

An alumni career-trajectory retrospective mechanism and longitudinal tracking studies (3-, 5-, and 10-year follow-ups).

Research Program II

Econometric Policy Analysis

Applying causal inference and spatial econometrics to fill the critical data vacuum in international talent policy.

Research Methodology — Causal Inference & Spatial Analysis Toolkit

1

Quasi-Experimental Policy Evaluation

Difference-in-Differences (DiD), Regression Discontinuity Design (RDD), and Instrumental Variable (IV) approaches.
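
At its core, the 2x2 DiD estimator is a double difference of group means: the treated group's change over time minus the control group's change. A minimal sketch with invented outcome data:

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical two-group, two-period difference-in-differences:
    (treated change over time) minus (control change over time)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical outcomes before and after a policy change
effect = did_estimate([10, 12], [18, 20], [9, 11], [12, 14])
print(effect)  # 5.0
```

The control group's trend nets out shocks common to both groups; the estimate is causal only under the parallel-trends assumption, which regression versions of DiD test and relax.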

2

Input-Output Modeling & Fiscal Multiplier Analysis

Regional I-O models (BEA RIMS II framework) to estimate fiscal multiplier effects of high-skilled international talent.
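
The multiplier logic follows the standard Leontief inverse: output multipliers are the column sums of (I - A)^-1. A two-sector sketch with an invented technical-coefficient matrix (illustrative numbers, not RIMS II data):

```python
def leontief_inverse_2x2(A):
    """(I - A)^-1 for a two-sector technical-coefficient matrix A,
    via the closed-form 2x2 matrix inverse."""
    m = [[1 - A[0][0], -A[0][1]],
         [-A[1][0], 1 - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

A = [[0.20, 0.30],   # inputs sector 1 buys per dollar of each sector's output
     [0.10, 0.25]]
L = leontief_inverse_2x2(A)
# Output multiplier for sector j = column sum of the Leontief inverse;
# each exceeds 1 because it captures direct plus indirect effects.
multipliers = [L[0][j] + L[1][j] for j in range(2)]
print(multipliers)
```

In practice the matrices are far larger and region-specific, but the interpretation is the same: a one-dollar final-demand shock in a sector generates `multiplier` dollars of total regional output.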

3

GIS-Based Spatial Analysis

Moran's I, LISA, and hot-spot analysis to map talent concentration patterns; Geographically Weighted Regression (GWR) to model spatially varying effects.
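
Global Moran's I condenses to a few lines; the sketch below uses a toy four-region row with adjacency weights and invented talent counts:

```python
def morans_i(values, weights):
    """Global Moran's I: spatial autocorrelation of `values` under the
    spatial weight matrix `weights` (weights[i][j], zero diagonal).
    Positive values indicate clustering of similar values."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_total = sum(sum(row) for row in weights)
    return (n / w_total) * (num / den)

# Four regions in a row; high talent counts cluster at one end
w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(morans_i([1, 1, 5, 5], w))  # positive -> spatial clustering
```

LISA decomposes this global statistic into per-region contributions, which is what drives the hot-spot maps.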

4

Machine Learning for Talent-Industry Matching

Random Forest, Gradient Boosting, k-means clustering, NLP on job posting and curriculum data to quantify skill-gap indices.
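
One simple way a skill-gap index can be operationalized is as the share of skills demanded in postings that the curriculum fails to cover; the set-based sketch below is illustrative only (skill names invented, real pipelines extract skills via NLP first):

```python
def skill_gap_index(demanded, taught):
    """Share of demanded skills absent from the curriculum:
    0.0 = full coverage, 1.0 = no overlap."""
    demanded, taught = set(demanded), set(taught)
    return len(demanded - taught) / len(demanded)

posting_skills = ["python", "sql", "causal inference", "gis"]
curriculum_skills = ["python", "sql"]
print(skill_gap_index(posting_skills, curriculum_skills))  # 0.5
```

The clustering and ensemble models then group occupations and programs by these skill profiles rather than by job titles.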

5

Longitudinal Panel Data Analysis

Fixed-effects and random-effects panel models, survival analysis, and Structural Equation Modeling (SEM) with longitudinal mediation.
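
The fixed-effects (within) estimator can be sketched for a single regressor: demean x and y within each entity to absorb time-invariant entity effects, then run pooled OLS on the deviations. Entity IDs and values below are invented:

```python
from collections import defaultdict

def fe_slope(panel):
    """Within (fixed-effects) estimator for one regressor.
    panel: list of (entity_id, x, y) observations."""
    groups = defaultdict(list)
    for entity, x, y in panel:
        groups[entity].append((x, y))
    num = den = 0.0
    for obs in groups.values():
        mx = sum(x for x, _ in obs) / len(obs)  # entity means
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)          # OLS on within-entity
            den += (x - mx) ** 2                # deviations
    return num / den

# y = 2*x + entity effect (10 for "A", 0 for "B"): the within
# estimator recovers the slope 2.0 despite the level shift.
panel = [("A", 1, 12), ("A", 2, 14), ("B", 1, 2), ("B", 3, 6)]
print(fe_slope(panel))  # 2.0
```

Random-effects models instead treat the entity effects as draws from a distribution, trading robustness for efficiency; a Hausman test typically adjudicates between the two.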

Research Program III

Digital Public Goods & Equity-Centered Technology

Translating our psychometric and econometric research into open-access technology — fulfilling our institutional commitment to educational equity.

1

Heuristic AI Dialogue Engine

Adaptive assessment logic flows integrating HTCS dimension scores with live labor market data for personalized path simulations, with bias auditing and fairness constraints.
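
One common adaptive rule is to administer next the item whose difficulty sits closest to the current ability estimate, which is maximally informative under a 1PL assumption; the item bank and field names below are hypothetical:

```python
def next_item(theta_hat, item_bank):
    """Select the unadministered item whose difficulty b is closest to
    the current ability estimate theta_hat (most informative choice
    under a 1PL model)."""
    return min(item_bank, key=lambda item: abs(item["b"] - theta_hat))

bank = [{"id": 1, "b": -1.0}, {"id": 2, "b": 0.2}, {"id": 3, "b": 1.5}]
print(next_item(0.0, bank)["id"])  # 2
```

After each response the ability estimate is updated and the rule is applied again, so the assessment converges on informative items instead of walking a fixed script.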

2

Decision-as-Data Research Feedback System

With full user consent and anonymization, the assessment platform functions as a behavioral data collection instrument, creating a self-reinforcing research-service loop.

3

Open-Source Knowledge Commons

Release of de-identified datasets, methodological documentation, assessment instrument specifications, analytical code, and standardized decision-support worksheets.

Research Ethics

Data Ethics Framework

Informed Consent

All data collection instruments include clear, multilingual consent disclosures. Participation is voluntary.

De-identification Standards

All research samples undergo strict anonymization compliant with GDPR and CCPA standards.

Differential Privacy

Aggregated research outputs employ differential privacy techniques to prevent reverse-engineering of individual data.
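
The Laplace mechanism is the textbook way to achieve ε-differential privacy for count statistics; the sketch below is illustrative (ε and the released statistic are invented for the example):

```python
import math
import random

def laplace_noise(scale):
    """Draw from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy: one person
    changes the count by at most `sensitivity`, so Laplace noise with
    scale sensitivity/epsilon masks any individual's presence."""
    return true_count + laplace_noise(sensitivity / epsilon)

print(dp_count(1200, epsilon=0.5))  # noisy release near 1200
```

Smaller ε means more noise and stronger privacy; in practice an ε budget is tracked across all releases from the same dataset, since guarantees compose additively.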

IRB-Equivalent Review

All human subjects research protocols are reviewed by an independent ethics advisory panel prior to data collection.

Quality Assurance

Validity Monitoring

  • Ongoing construct validity audits using multi-trait multi-method (MTMM) matrices.
  • Annual model recalibration based on incoming longitudinal outcome data.
  • External peer review of all white paper publications prior to public release.
  • Transparent reporting of model limitations, confidence intervals, and known biases in all published outputs.