Prompt Compressor

Reduce your prompt's token count by up to 90% before sending it to a model.

Note: Compression may impact LLM accuracy.
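The compression idea behind tools like this (following the Selective Context paper cited below) is to drop the tokens that carry the least self-information. The sketch below is a minimal, hypothetical illustration: it estimates self-information with simple unigram frequencies instead of a language model's token probabilities, and the function name and `keep_ratio` parameter are assumptions, not this tool's actual API.

```python
import math
from collections import Counter

def compress_prompt(prompt: str, keep_ratio: float = 0.5) -> str:
    """Keep only the highest self-information words of a prompt.

    Self-information is estimated here as -log p(w) under a unigram
    frequency model (an assumption; the paper scores tokens with a
    causal LM). Frequent filler words score low and are dropped.
    """
    words = prompt.split()
    counts = Counter(w.lower() for w in words)
    total = len(words)
    # I(w) = -log p(w): rarer words carry more information.
    scores = [-math.log(counts[w.lower()] / total) for w in words]
    keep_n = max(1, int(len(words) * keep_ratio))
    # Select the top-scoring positions, then restore original order.
    keep_idx = sorted(
        sorted(range(len(words)), key=lambda i: -scores[i])[:keep_n]
    )
    return " ".join(words[i] for i in keep_idx)
```

With `keep_ratio=0.5`, repeated filler words ("the") are discarded first while the rarer, content-bearing words survive in their original order; this is also why aggressive compression can hurt accuracy, since the score is only a proxy for what the downstream model actually needs.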

Enter Your Prompt


Compressed Prompt Output

Citation

@misc{li2023compressing,
  title={Compressing Context to Enhance Inference Efficiency of Large Language Models},
  author={Yucheng Li and Bo Dong and Chenghua Lin and Frank Guerin},
  year={2023},
  eprint={2310.06201},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}