Logs

Recent Search Logs
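Each entry below records the raw query string, an ISO-8601 timestamp (with microseconds), and the top results returned as a JSON array of {"title", "url"} objects. As a minimal sketch of how one such entry might be modeled and parsed (the SearchLogEntry class and parse_entry helper are hypothetical illustrations, not part of the logging code itself):

import json
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SearchLogEntry:
    query: str                 # raw query string as typed by the user
    timestamp: datetime        # parsed from the ISO-8601 timestamp field
    results: list[dict]        # each item: {"title": ..., "url": ...}

def parse_entry(query: str, timestamp: str, results_json: str) -> SearchLogEntry:
    # datetime.fromisoformat handles timestamps like 2024-09-18T05:58:10.173884
    return SearchLogEntry(
        query=query,
        timestamp=datetime.fromisoformat(timestamp),
        results=json.loads(results_json),
    )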

Query: text classification datasets

Timestamp: 2024-09-18T05:58:10.173884

Results:
[{"title": "Holistic Evaluation of Language Models", "url": "https://arxiv.org/abs/2211.09110"}, {"title": "Universal Language Model Fine-tuning for Text Classification", "url": "https://arxiv.org/abs/1801.06146"}, {"title": "UL2: Unifying Language Learning Paradigms", "url": "https://arxiv.org/abs/2205.05131"}]

Query: how to make llm plastic

Timestamp: 2024-09-18T05:57:02.697644

Results:
[{"title": "Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks", "url": "https://arxiv.org/abs/2302.08399"}, {"title": "Evolution through Large Models", "url": "https://arxiv.org/abs/2206.08896"}, {"title": "Robots That Ask For Help: Uncertainty Alignmentfor Large Language Model Planners", "url": "https://arxiv.org/abs/2307.01928"}]

Query: how to make llm plastic

Timestamp: 2024-09-18T05:56:56.287167

Results:
[{"title": "Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks", "url": "https://arxiv.org/abs/2302.08399"}, {"title": "Evolution through Large Models", "url": "https://arxiv.org/abs/2206.08896"}, {"title": "Robots That Ask For Help: Uncertainty Alignmentfor Large Language Model Planners", "url": "https://arxiv.org/abs/2307.01928"}]

Query: LLM as ranker

Timestamp: 2024-09-18T05:56:14.294136

Results:
[{"title": "LLM-Blender: Ensembling Large Language Modelswith Pairwise Ranking and Generative Fusion", "url": "https://arxiv.org/abs/2306.02561"}, {"title": "Mixture-Of-Agents Enhances Large Language Model Capabilities", "url": "https://arxiv.org/abs/2406.04692"}, {"title": "Annotating Data for Fine-Tuning a Neural Ranker? Current Active Learning Strategies are not Better than Random Selection", "url": "https://arxiv.org/abs/2309.06131"}]

Query: LLM

Timestamp: 2024-09-17T08:43:38.163897

Results:
[{"title": "Better Call GPT, Comparing Large Language Models Against Lawyers", "url": "https://arxiv.org/abs/2401.16212"}, {"title": "Llm+P: Empowering Large Language Models With Optimal Planning Proficiency", "url": "https://arxiv.org/abs/2304.11477"}, {"title": "FrugalGPT: How to Use Large Language ModelsWhile Reducing Cost and Improving Performance", "url": "https://arxiv.org/abs/2305.05176"}]

Query: what is occupational therapy?

Timestamp: 2024-09-09T07:21:43.053994

Results:
[{"title": "The Capacity for Moral Self-Correction in Large Language Models", "url": "https://arxiv.org/abs/2302.07459"}, {"title": "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models", "url": "https://arxiv.org/abs/2303.10130"}, {"title": "Transformers Need Glasses! | Information Over-Squashing In Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}]

Query: what is occupational therapy?

Timestamp: 2024-09-09T06:38:05.221950

Results:
[{"title": "The Capacity for Moral Self-Correction in Large Language Models", "url": "https://arxiv.org/abs/2302.07459"}, {"title": "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models", "url": "https://arxiv.org/abs/2303.10130"}, {"title": "Transformers Need Glasses! | Information Over-Squashing In Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}]

Query: llm with memory

Timestamp: 2024-08-28T09:39:53.713952

Results:
[{"title": "LLMin a flash:Efficient Large Language Model Inference with Limited Memory", "url": "https://arxiv.org/abs/2312.11514"}, {"title": "Augmenting Language Models withLong-Term Memory", "url": "https://arxiv.org/abs/2306.07174"}, {"title": "Efficient Memory Management for Large Language Model Serving withPagedAttention", "url": "https://arxiv.org/abs/2309.06180"}]

Query: flash attention

Timestamp: 2024-08-21T16:00:38.191555

Results:
[{"title": "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness", "url": "https://arxiv.org/abs/2205.14135"}, {"title": "Faster Causal Attention Over Large Sequences Through Sparse Flash Attention", "url": "https://arxiv.org/abs/2306.01160"}, {"title": "LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers", "url": "https://arxiv.org/abs/2310.03294"}]

Query: Knowledge editing in LLM

Timestamp: 2024-08-21T15:51:50.258707

Results:
[{"title": "MQuAKE: Assessing Knowledge Editing inLanguage Models via Multi-Hop Questions", "url": "https://arxiv.org/abs/2305.14795"}, {"title": "Automatically Correcting Large Language Models:Surveying the landscape of diverse self-correction strategies", "url": "https://arxiv.org/abs/2308.03188"}, {"title": "Tool Documentation Enables Zero-ShotTool-Usage with Large Language Models", "url": "https://arxiv.org/abs/2308.00675"}]

Query: hello

Timestamp: 2024-08-20T15:14:31.658188

Results:
[{"title": "skill-mix: a flexible and expandable family of evaluations for ai models", "url": "https://arxiv.org/abs/2310.17567"}, {"title": "Encouraging Divergent Thinking in Large Language Modelsthrough Multi-Agent Debate", "url": "https://arxiv.org/abs/2305.19118"}, {"title": "Thinking Like Transformers", "url": "https://arxiv.org/abs/2106.06981"}]

Query: what are you capable off

Timestamp: 2024-08-20T11:53:23.407348

Results:
[{"title": "The Alignment Problem from a Deep Learning Perspective", "url": "https://arxiv.org/abs/2209.00626"}, {"title": "Foundation Models And Fair Use", "url": "https://arxiv.org/abs/2303.15715"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}]

Query: reinforcement learning

Timestamp: 2024-08-18T18:53:05.051409

Results:
[{"title": "RLTF: Reinforcement Learning from Unit Test Feedback", "url": "https://arxiv.org/abs/2307.04349"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Deep Reinforcement Learningfrom Human Preferences", "url": "https://arxiv.org/abs/1706.03741"}]

Query: test

Timestamp: 2024-08-16T11:22:42.763778

Results:
[{"title": "Measuring Massive MultitaskLanguage Understanding", "url": "https://arxiv.org/abs/2009.03300"}, {"title": "Evaluating Interpolation and Extrapolation Performance of Neural Retrieval Models", "url": "https://arxiv.org/abs/2204.11447"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}]

Query: Abstraction and Reasoning Corpus

Timestamp: 2024-08-15T11:10:30.414083

Results:
[{"title": "Connecting The Dots: Evaluating Abstract Reasoning Capabilities Of Llms Using The New York Times Connections Word Game Prisha Samadarshi2 Mariam Mustafa2 Anushka Kulkarni2 **Raven Rothkopf**2", "url": "https://arxiv.org/abs/2406.11012"}, {"title": "Comparing Humans, Gpt-4, And Gpt-4V On Abstraction And Reasoning Tasks", "url": "https://arxiv.org/abs/2311.09247"}, {"title": "Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models", "url": "https://arxiv.org/abs/2310.06117"}]

Query: test

Timestamp: 2024-08-15T11:10:11.201385

Results:
[{"title": "Measuring Massive MultitaskLanguage Understanding", "url": "https://arxiv.org/abs/2009.03300"}, {"title": "Evaluating Interpolation and Extrapolation Performance of Neural Retrieval Models", "url": "https://arxiv.org/abs/2204.11447"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}]

Query: 你是谁 (Who are you?)

Timestamp: 2024-08-14T12:28:59.594207

Results:
[{"title": "WebGPT: Browser-assisted question-answering with human feedback", "url": "https://arxiv.org/abs/2112.09332"}, {"title": "Transformers Need Glasses! | Information Over-Squashing In Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: can you help with fasthtml?

Timestamp: 2024-08-14T04:32:36.405201

Results:
[{"title": "LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models", "url": "https://arxiv.org/abs/2306.12420"}, {"title": "QuALITY: Question Answering with Long Input Texts, Yes!", "url": "https://arxiv.org/abs/2112.08608"}, {"title": "The Flan Collection: Designing Data And Methods For Effective Instruction Tuning", "url": "https://arxiv.org/abs/2301.13688"}]

Query: Hi

Timestamp: 2024-08-13T21:53:10.547962

Results:
[{"title": "A Watermark For Large Language Models John Kirchenbauer * Jonas Geiping * **Yuxin Wen Jonathan Katz Ian Miers Tom Goldstein** University Of Maryland", "url": "https://arxiv.org/abs/2301.10226"}, {"title": "Thinking Like Transformers", "url": "https://arxiv.org/abs/2106.06981"}, {"title": "Data Curation Via Joint Example Selection Further Accelerates Multimodal Learning", "url": "https://arxiv.org/abs/2406.17711"}]

Query: How are u

Timestamp: 2024-08-13T18:09:07.680129

Results:
[{"title": "U-Mamba: Enhancing Long-rangeDependency for\nBiomedical Image Segmentation", "url": "https://arxiv.org/abs/2401.04722"}, {"title": "WebGPT: Browser-assisted question-answering with human feedback", "url": "https://arxiv.org/abs/2112.09332"}, {"title": "Transcending Scaling Laws with 0.1% Extra Compute", "url": "https://arxiv.org/abs/2210.11399"}]

Query: audio llm

Timestamp: 2024-08-13T15:56:28.087131

Results:
[{"title": "ChatMusician: Understanding and Generating MusicIntrinsically with LLM", "url": "https://arxiv.org/abs/2402.16153"}, {"title": "CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation", "url": "https://arxiv.org/abs/2311.18775"}, {"title": "HEAR: Holistic Evaluation of Audio Representations", "url": "https://arxiv.org/abs/2203.03022"}]

Query: voice activity projection

Timestamp: 2024-08-13T15:56:16.188729

Results:
[{"title": "On the Horizon: Interactive and Compositional Deepfakes", "url": "https://arxiv.org/abs/2209.01714"}, {"title": "The emergence of number and syntax units in LSTM language models", "url": "https://arxiv.org/abs/1903.07435"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: number of days in a leap year

Timestamp: 2024-08-13T15:55:57.997909

Results:
[{"title": "xVal: A Continuous Number Encodingfor Large Language Models", "url": "https://arxiv.org/abs/2310.02989"}, {"title": "Enhancing Zero-Shot Chain-Of-Thought Reasoning In Large Language Models Through Logic Xufeng Zhao, Mengdi Li, Wenhao Lu, Cornelius Weber, Jae Hee Lee, Kun Chu, And Stefan Wermter", "url": "https://arxiv.org/abs/2309.13339"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: retrieval vectors

Timestamp: 2024-08-08T14:30:00.746115

Results:
[{"title": "Efficient Multi-Vector Dense Retrievalwith Bit Vectors", "url": "https://arxiv.org/abs/2404.02805"}, {"title": "Rethinking the Role of Token Retrieval inMulti-Vector Retrieval", "url": "https://arxiv.org/abs/2304.01982"}, {"title": "Dense Text Retrieval based on Pretrained Language Models: A Survey", "url": "https://arxiv.org/abs/2211.14876"}]

Query: lkj

Timestamp: 2024-08-08T14:18:34.991190

Results:
[{"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Punica: Multi-Tenant Lora S**Erving** Lequn Chen * 1 **Zihao Ye** * 1 Yongji Wu 2 Danyang Zhuo 2 Luis Ceze 1 **Arvind Krishnamurthy** 1", "url": "https://arxiv.org/abs/2310.18547"}, {"title": "More Agents Is All You Need", "url": "https://arxiv.org/abs/2402.05120"}]

Query: bing

Timestamp: 2024-08-08T14:18:29.397949

Results:
[{"title": "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset", "url": "https://arxiv.org/abs/1611.09268"}, {"title": "WebGPT: Browser-assisted question-answering with human feedback", "url": "https://arxiv.org/abs/2112.09332"}, {"title": "Query Rewriting for Retrieval-Augmented Large Language Models", "url": "https://arxiv.org/abs/2305.14283"}]

Query: bang pow

Timestamp: 2024-08-08T14:18:24.202476

Results:
[{"title": "Enhancing Zero-Shot Chain-Of-Thought Reasoning In Large Language Models Through Logic Xufeng Zhao, Mengdi Li, Wenhao Lu, Cornelius Weber, Jae Hee Lee, Kun Chu, And Stefan Wermter", "url": "https://arxiv.org/abs/2309.13339"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: web

Timestamp: 2024-08-08T14:16:15.008262

Results:
[{"title": "The RefinedWeb Dataset for Falcon LLM:Outperforming Curated Corpora with Web Data, and Web Data Only", "url": "https://arxiv.org/abs/2306.01116"}, {"title": "Agentbench: Evaluating Llms As Agents", "url": "https://arxiv.org/abs/2308.03688"}, {"title": "Generative Echo Chamber? Effects of LLM-Powered Search Systems on Diverse Information Seeking", "url": "https://arxiv.org/abs/2402.05880"}]

Query: foom

Timestamp: 2024-08-08T14:16:00.607787

Results:
[{"title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", "url": "https://arxiv.org/abs/2309.03852"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: foob

Timestamp: 2024-08-08T14:15:55.147072

Results:
[{"title": "Punica: Multi-Tenant Lora S**Erving** Lequn Chen * 1 **Zihao Ye** * 1 Yongji Wu 2 Danyang Zhuo 2 Luis Ceze 1 **Arvind Krishnamurthy** 1", "url": "https://arxiv.org/abs/2310.18547"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}]

Query: text to speech

Timestamp: 2024-08-08T14:15:29.099724

Results:
[{"title": "reStructured Pre-training", "url": "https://arxiv.org/abs/2206.11147"}, {"title": "Taskmatrix.Ai: Completing Tasks By Connecting Foundation Models With Millions Of Apis", "url": "https://arxiv.org/abs/2303.16434"}, {"title": "Trained on 100 million words and still in shape:BERT meets British National Corpus", "url": "https://arxiv.org/abs/2303.09859"}]

Query: foo bar baz

Timestamp: 2024-08-08T14:15:21.255220

Results:
[{"title": "Simple Synthetic Data Reduces Sycophancy In Large Language Models", "url": "https://arxiv.org/abs/2308.03958"}, {"title": "LMDX: Language Model-based DocumentInformation Extraction And Localization", "url": "https://arxiv.org/abs/2309.10952"}, {"title": "Larger Language Models Do In-Context Learning Differently", "url": "https://arxiv.org/abs/2303.03846"}]

Query: wowee

Timestamp: 2024-08-08T14:15:12.393924

Results:
[{"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}, {"title": "Transformers Need Glasses! | Information Over-Squashing In Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}]

Query: bl

Timestamp: 2024-08-08T14:15:05.361990

Results:
[{"title": "BLIP: Bootstrapping Language-Image Pre-training forUnified Vision-Language Understanding and Generation", "url": "https://arxiv.org/abs/2201.12086"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Pipefisher: Efficient Training Of Large Language Models U**Sing** Pipelining And Fisher Information M**Atrices** Kazuki Osawa 1 Shigang Li 2 **Torsten Hoefler** 1", "url": "https://arxiv.org/abs/2211.14133"}]

Query: hello

Timestamp: 2024-08-08T14:15:01.044583

Results:
[{"title": "skill-mix: a flexible and expandable family of evaluations for ai models", "url": "https://arxiv.org/abs/2310.17567"}, {"title": "Encouraging Divergent Thinking in Large Language Modelsthrough Multi-Agent Debate", "url": "https://arxiv.org/abs/2305.19118"}, {"title": "Thinking Like Transformers", "url": "https://arxiv.org/abs/2106.06981"}]

Query: LLM watermarking

Timestamp: 2024-08-08T14:12:54.014698

Results:
[{"title": "On the Reliability of Watermarks for Large Language Models", "url": "https://arxiv.org/abs/2306.04634"}, {"title": "A Watermark For Large Language Models John Kirchenbauer * Jonas Geiping * **Yuxin Wen Jonathan Katz Ian Miers Tom Goldstein** University Of Maryland", "url": "https://arxiv.org/abs/2301.10226"}, {"title": "Can AI-Generated Text be Reliably Detected?", "url": "https://arxiv.org/abs/2303.11156"}]

Query: kv cache compression

Timestamp: 2024-08-08T13:00:43.009191

Results:
[{"title": "Model Tells You What to Discard:Adaptive KV Cache Compression for LLMs", "url": "https://arxiv.org/abs/2310.01801"}, {"title": "PyramidInfer: Pyramid KV Cache Compressionfor High-throughput LLM Inference", "url": "https://arxiv.org/abs/2405.12532"}]

Query: What is the best name for a penis?

Timestamp: 2024-08-08T10:15:26.591771

Results:
[{"title": "WebGPT: Browser-assisted question-answering with human feedback", "url": "https://arxiv.org/abs/2112.09332"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: What do you think about apple pie?

Timestamp: 2024-08-08T10:14:37.140241

Results:
[{"title": "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models", "url": "https://arxiv.org/abs/2309.14717"}, {"title": "QuALITY: Question Answering with Long Input Texts, Yes!", "url": "https://arxiv.org/abs/2112.08608"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}]

Query: What do you think about noobs

Timestamp: 2024-08-08T10:14:11.126679

Results:
[{"title": "Is Power-Seeking AI an Existential Risk?", "url": "https://arxiv.org/abs/2206.13353"}, {"title": "Co-Writing Screenplays and Theatre Scripts with Language ModelsAn Evaluation by Industry Professionals", "url": "https://arxiv.org/abs/2209.14958"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}]

Query: state space model

Timestamp: 2024-08-08T06:12:11.484415

Results:
[{"title": "Mamba: Linear-Time Sequence Modeling With Selective State Spaces", "url": "https://arxiv.org/abs/2312.00752"}, {"title": "Repeat After Me:Transformers are Better than State Space Models at CopyingTransformers are Better than State Space Models at Copying", "url": "https://arxiv.org/abs/2402.01032"}, {"title": "Transformers Are Ssms: Generalized Models And Efficient Algorithms Through Structured State Space Duality", "url": "https://arxiv.org/abs/2405.21060"}]