HSC-Bench

Leaderboard

Centralized benchmark results for service recommendation and service composition. The first release supports static filters only.

How to read the leaderboard

The first release uses static, CSV-backed tables. Each result should specify the dataset, the model type, code availability, official reproduction status, and whether it follows the unified split and evaluation protocol. An illustrative row format is sketched below.
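For orientation, here is a hypothetical row in that CSV format. The column names are illustrative assumptions, not the official schema; the CSV files in the repository are authoritative.

    model,dataset,type,k,p_at_5,p_at_10,r_at_5,r_at_10,ndcg_at_5,ndcg_at_10,mrr,code_url,paper_url
    GSAT,HSC+,Graph-based,10,TBD,TBD,TBD,TBD,TBD,TBD,TBD,<code link>,<paper link>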

Service Recommendation

Model | Dataset | Type | K | P@5 | P@10 | R@5 | R@10 | NDCG@5 | NDCG@10 | MRR | Code | Paper
SRLCF | HSC+ | Benchmark Model | 5 | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
MTFM | HSC+ | Neural | 10 | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
GSAT | HSC+ | Graph-based | 10 | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
GSL-Mash | ProgrammableWeb | Graph-based | 10 | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
Frequency | HSC+ | Traditional | 10 | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
LLM-based reranking | HSC+ | LLM-based | 10 | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
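As a reference for how the ranking columns are typically computed, here is a minimal sketch of the standard top-K metrics under binary relevance. The function names and gain conventions below are illustrative; the published evaluation script is authoritative.

    import math

    def precision_at_k(ranked, relevant, k):
        # Fraction of the top-k recommendations that are relevant.
        return sum(1 for item in ranked[:k] if item in relevant) / k

    def recall_at_k(ranked, relevant, k):
        # Fraction of all relevant items recovered in the top-k.
        hits = sum(1 for item in ranked[:k] if item in relevant)
        return hits / len(relevant) if relevant else 0.0

    def ndcg_at_k(ranked, relevant, k):
        # Binary-gain DCG over the top-k, normalized by the ideal DCG.
        dcg = sum(1.0 / math.log2(i + 2)
                  for i, item in enumerate(ranked[:k]) if item in relevant)
        ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant))))
        return dcg / ideal if ideal > 0 else 0.0

    def mrr(ranked, relevant):
        # Reciprocal rank of the first relevant recommendation.
        for i, item in enumerate(ranked):
            if item in relevant:
                return 1.0 / (i + 1)
        return 0.0

For example, precision_at_k(["a", "b", "c"], {"b"}, 2) returns 0.5, since one of the top two recommendations is relevant.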

Service Composition

Model | Dataset | Type | Utility ↑ | RT (response time) ↓ | Cost ↓ | Throughput ↑ | Availability ↑ | Reliability ↑ | Code | Paper
GNNPN-SC | HSC+ | Learning-based | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
SDFGA | HSC+ | Optimization-based | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
DAAGA | HSC+ | Optimization-based | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
GA | QWS | Optimization-based | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
LLM Planner | HSC+ | LLM-based | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
Multi-agent pipeline | HSC+ | LLM-based | TBD | TBD | TBD | TBD | TBD | TBD | Code | Paper
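The composition columns are per-workflow QoS aggregates. As context, a common convention for a purely sequential composition (an assumption here, not necessarily HSC-Bench's exact formulas) is: response times and costs add, throughput is the bottleneck minimum, and availability and reliability multiply. A minimal sketch:

    from dataclasses import dataclass

    @dataclass
    class QoS:
        response_time: float  # lower is better
        cost: float           # lower is better
        throughput: float     # higher is better
        availability: float   # probability in [0, 1]
        reliability: float    # probability in [0, 1]

    def compose_sequential(services):
        # Aggregate per-service QoS along a sequential workflow.
        composed = QoS(0.0, 0.0, float("inf"), 1.0, 1.0)
        for s in services:
            composed.response_time += s.response_time  # latencies add
            composed.cost += s.cost                    # costs add
            composed.throughput = min(composed.throughput, s.throughput)  # bottleneck
            composed.availability *= s.availability    # independent failures assumed
            composed.reliability *= s.reliability
        return composed

Parallel or conditional branches aggregate differently (e.g., the maximum of branch response times), and the Utility column is typically a weighted sum of normalized attributes; both are defined by the benchmark's evaluation script.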

Submission protocol

  1. Use the official dataset version and split.
  2. Run the published evaluation script with a fixed random seed and configuration.
  3. Add a row to the corresponding CSV file (see the sketch after this list).
  4. Submit a pull request with paper link, code link, configuration, logs, and hardware information.
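As a minimal sketch of step 3, assuming a hypothetical file name service_recommendation.csv and the illustrative column names from above (the repository's actual CSV files define the real names and order):

    import csv

    # Placeholder values only; replace with metrics produced by the
    # official evaluation script. Appending assumes the CSV file
    # already has a header row.
    row = {
        "model": "MyModel", "dataset": "HSC+", "type": "Neural", "k": 10,
        "p_at_5": 0.0, "p_at_10": 0.0, "r_at_5": 0.0, "r_at_10": 0.0,
        "ndcg_at_5": 0.0, "ndcg_at_10": 0.0, "mrr": 0.0,
        "code_url": "https://example.com/code",
        "paper_url": "https://example.com/paper",
    }

    with open("service_recommendation.csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row.keys()))
        writer.writerow(row)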