Logs

Recent Search Logs

Query: deception

Timestamp: 2025-03-30T15:06:20.072915

Results:
[{"title": "Challenges and Applications of Large Language Models", "url": "https://arxiv.org/abs/2307.10169"}, {"title": "Is Power-Seeking AI an Existential Risk?", "url": "https://arxiv.org/abs/2206.13353"}, {"title": "Unsolved Problems in ML Safety", "url": "https://arxiv.org/abs/2109.13916"}]

Query: transformers

Timestamp: 2025-03-18T23:08:55.043387

Results:
[{"title": "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention", "url": "https://arxiv.org/abs/2006.16236"}, {"title": "Transformers Need Glasses! Information Over-Squashing in Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}, {"title": "Let's Think Dot by Dot: Hidden Computation in Transformer Language Models", "url": "https://arxiv.org/abs/2404.15758"}]

Query: code embeddings

Timestamp: 2025-03-17T17:37:33.078647

Results:
[{"title": "CodeSearchNet Challenge: Evaluating the State of Semantic Code Search", "url": "https://arxiv.org/abs/1909.09436"}, {"title": "Repetition Improves Language Model Embeddings", "url": "https://arxiv.org/abs/2402.15449"}, {"title": "WaveCoder: Widespread and Versatile Enhanced Instruction Tuning with Refined Data Generation", "url": "https://arxiv.org/abs/2312.14187"}]

Query: Who was Bernard Bolzano?

Timestamp: 2025-03-12T14:25:44.401737

Results:
[{"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}, {"title": "InstructPix2Pix: Learning to Follow Image Editing Instructions", "url": "https://arxiv.org/abs/2211.09800"}, {"title": "Transformers Need Glasses! Information Over-Squashing in Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}]

Query: megatron

Timestamp: 2025-03-09T15:43:33.399296

Results:
[{"title": "Dissecting the Runtime Performance of the Training, Fine-tuning, and Inference of Large Language Models", "url": "https://arxiv.org/abs/2311.03687"}, {"title": "Reducing Activation Recomputation in Large Transformer Models", "url": "https://arxiv.org/abs/2205.05198"}, {"title": "LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers", "url": "https://arxiv.org/abs/2310.03294"}]

Query: who is jeeremy howard

Timestamp: 2025-01-03T15:49:14.322395

Results:
[{"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}, {"title": "Parallelizing Linear Transformers With The Delta Rule Over Sequence Length", "url": "https://arxiv.org/abs/2406.06484"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: Machine learning

Timestamp: 2025-01-03T11:57:35.415456

Results:
[{"title": "Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims", "url": "https://arxiv.org/abs/2004.07213"}, {"title": "On the Opportunities and Risks of Foundation Models", "url": "https://arxiv.org/abs/2108.07258"}, {"title": "Mindstorms in Natural Language-Based Societies of Mind", "url": "https://arxiv.org/abs/2305.17066"}]

Query: What's your most favorite line of code?

Timestamp: 2025-01-02T15:14:54.581979

Results:
[{"title": "CAMEL: Communicative Agents for \"Mind\" Exploration of Large Language Model Society", "url": "https://arxiv.org/abs/2303.17760"}, {"title": "AlpaGasus: Training a Better Alpaca with Fewer Data", "url": "https://arxiv.org/abs/2307.08701"}, {"title": "The Flan Collection: Designing Data and Methods for Effective Instruction Tuning", "url": "https://arxiv.org/abs/2301.13688"}]

Query: rajamannar

Timestamp: 2025-01-02T01:47:05.324964

Results:
[{"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Sparse High Rank Adapters", "url": "https://arxiv.org/abs/2406.13175"}]

Query: poopy 💩

Timestamp: 2024-12-22T14:37:12.940363

Results:
[{"title": "WebGPT: Browser-assisted question-answering with human feedback", "url": "https://arxiv.org/abs/2112.09332"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: What can you do?

Timestamp: 2024-12-21T13:18:20.477333

Results:
[{"title": "LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models", "url": "https://arxiv.org/abs/2306.12420"}, {"title": "QLoRA: Efficient Finetuning of Quantized LLMs", "url": "https://arxiv.org/abs/2305.14314"}, {"title": "DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents", "url": "https://arxiv.org/abs/2303.17071"}]

Query: Papers about college admissions

Timestamp: 2024-12-20T23:22:03.658553

Results:
[{"title": "reStructured Pre-training", "url": "https://arxiv.org/abs/2206.11147"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}, {"title": "COIG-CQIA: Quality is All You Need for Chinese Instruction Fine-tuning", "url": "https://arxiv.org/abs/2403.18058"}]

Query: South park francais

Timestamp: 2024-12-20T22:32:07.312832

Results:
[{"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}, {"title": "Transformers Need Glasses! Information Over-Squashing in Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}]

Query: RAG

Timestamp: 2024-12-20T18:25:57.612184

Results:
[{"title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", "url": "https://arxiv.org/abs/2005.11401"}, {"title": "RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture", "url": "https://arxiv.org/abs/2401.08406"}, {"title": "Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA", "url": "https://arxiv.org/abs/2406.17419"}]

Query: where am i

Timestamp: 2024-12-20T18:25:50.241667

Results:
[{"title": "Sparks of Artificial General Intelligence: Early Experiments with GPT-4", "url": "https://arxiv.org/abs/2303.12712"}, {"title": "WebGPT: Browser-assisted question-answering with human feedback", "url": "https://arxiv.org/abs/2112.09332"}, {"title": "Stream of Search (SoS): Learning to Search in Language", "url": "https://arxiv.org/abs/2404.03683"}]

Query: show me what you can do

Timestamp: 2024-12-20T18:25:33.210971

Results:
[{"title": "Chameleon: Mixed-Modal Early-Fusion Foundation Models", "url": "https://arxiv.org/abs/2405.09818"}, {"title": "WebGPT: Browser-assisted question-answering with human feedback", "url": "https://arxiv.org/abs/2112.09332"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}]

Query: what?

Timestamp: 2024-12-20T17:44:36.130480

Results:
[{"title": "LaMDA: Language Models for Dialog Applications", "url": "https://arxiv.org/abs/2201.08239"}, {"title": "WebGPT: Browser-assisted question-answering with human feedback", "url": "https://arxiv.org/abs/2112.09332"}, {"title": "Transformers Need Glasses! Information Over-Squashing in Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}]

Query: battery thermal management

Timestamp: 2024-12-20T17:10:15.326619

Results:
[{"title": "DGEMM on Integer Matrix Multiplication Unit", "url": "https://arxiv.org/abs/2306.11975"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: battery thermal

Timestamp: 2024-12-20T17:10:07.153947

Results:
[{"title": "CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation", "url": "https://arxiv.org/abs/2311.18775"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: what is the meaning of life?

Timestamp: 2024-12-20T13:41:09.158292

Results:
[{"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: Hey there

Timestamp: 2024-12-20T09:49:36.268403

Results:
[{"title": "Generative Agents: Interactive Simulacra of Human Behavior", "url": "https://arxiv.org/abs/2304.03442"}, {"title": "A Categorical Archive of ChatGPT Failures", "url": "https://arxiv.org/abs/2302.03494"}, {"title": "Thinking Like Transformers", "url": "https://arxiv.org/abs/2106.06981"}]

Query: How to use Bert mod

Timestamp: 2024-12-20T05:22:26.918259

Results:
[{"title": "A Primer in BERTology: What We Know About How BERT Works", "url": "https://arxiv.org/abs/2002.12327"}, {"title": "Tensor Programs V: Tuning Large Neural Networks Via Zero-Shot Hyperparameter Transfer", "url": "https://arxiv.org/abs/2203.03466"}]

Query: everything

Timestamp: 2024-12-19T20:20:38.489930

Results:
[{"title": "Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation", "url": "https://arxiv.org/abs/2311.04254"}, {"title": "Artificial Intelligence, Values, and Alignment", "url": "https://arxiv.org/abs/2001.09768"}, {"title": "A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity", "url": "https://arxiv.org/abs/2305.13169"}]

Query: Help

Timestamp: 2024-10-13T13:31:14.713671

Results:
[{"title": "Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners", "url": "https://arxiv.org/abs/2307.01928"}, {"title": "AgentBench: Evaluating LLMs as Agents", "url": "https://arxiv.org/abs/2308.03688"}, {"title": "Unsolved Problems in ML Safety", "url": "https://arxiv.org/abs/2109.13916"}]

Query: Help

Timestamp: 2024-10-10T16:00:29.045130

Results:
[{"title": "Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners", "url": "https://arxiv.org/abs/2307.01928"}, {"title": "AgentBench: Evaluating LLMs as Agents", "url": "https://arxiv.org/abs/2308.03688"}, {"title": "Unsolved Problems in ML Safety", "url": "https://arxiv.org/abs/2109.13916"}]

Query: attention

Timestamp: 2024-10-08T11:50:10.467676

Results:
[{"title": "Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models", "url": "https://arxiv.org/abs/2401.04658"}, {"title": "System 2 Attention (is something you might need too)", "url": "https://arxiv.org/abs/2311.11829"}, {"title": "How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers", "url": "https://arxiv.org/abs/2211.03495"}]

Query: fastai

Timestamp: 2024-10-08T07:01:15.199717

Results:
[{"title": "Fine-Tuning Can Distort Pretrained Features and Underperform Out-of-Distribution", "url": "https://arxiv.org/abs/2202.10054"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Punica: Multi-Tenant LoRA Serving", "url": "https://arxiv.org/abs/2310.18547"}]

Query: flow matching

Timestamp: 2024-10-08T07:00:31.667951

Results:
[{"title": "Improving And Generalizing Flow-Based Generative Models With Minibatch Optimal Transport", "url": "https://arxiv.org/abs/2302.00482"}]

Query: text classification datasets

Timestamp: 2024-09-18T05:58:10.173884

Results:
[{"title": "Holistic Evaluation of Language Models", "url": "https://arxiv.org/abs/2211.09110"}, {"title": "Universal Language Model Fine-tuning for Text Classification", "url": "https://arxiv.org/abs/1801.06146"}, {"title": "UL2: Unifying Language Learning Paradigms", "url": "https://arxiv.org/abs/2205.05131"}]

Query: how to make llm plastic

Timestamp: 2024-09-18T05:57:02.697644

Results:
[{"title": "Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks", "url": "https://arxiv.org/abs/2302.08399"}, {"title": "Evolution through Large Models", "url": "https://arxiv.org/abs/2206.08896"}, {"title": "Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners", "url": "https://arxiv.org/abs/2307.01928"}]

Query: how to make llm plastic

Timestamp: 2024-09-18T05:56:56.287167

Results:
[{"title": "Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks", "url": "https://arxiv.org/abs/2302.08399"}, {"title": "Evolution through Large Models", "url": "https://arxiv.org/abs/2206.08896"}, {"title": "Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners", "url": "https://arxiv.org/abs/2307.01928"}]

Query: LLM as ranker

Timestamp: 2024-09-18T05:56:14.294136

Results:
[{"title": "LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion", "url": "https://arxiv.org/abs/2306.02561"}, {"title": "Mixture-of-Agents Enhances Large Language Model Capabilities", "url": "https://arxiv.org/abs/2406.04692"}, {"title": "Annotating Data for Fine-Tuning a Neural Ranker? Current Active Learning Strategies are not Better than Random Selection", "url": "https://arxiv.org/abs/2309.06131"}]

Query: LLM

Timestamp: 2024-09-17T08:43:38.163897

Results:
[{"title": "Better Call GPT, Comparing Large Language Models Against Lawyers", "url": "https://arxiv.org/abs/2401.16212"}, {"title": "LLM+P: Empowering Large Language Models with Optimal Planning Proficiency", "url": "https://arxiv.org/abs/2304.11477"}, {"title": "FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance", "url": "https://arxiv.org/abs/2305.05176"}]

Query: what is occupational therapy?

Timestamp: 2024-09-09T07:21:43.053994

Results:
[{"title": "The Capacity for Moral Self-Correction in Large Language Models", "url": "https://arxiv.org/abs/2302.07459"}, {"title": "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models", "url": "https://arxiv.org/abs/2303.10130"}, {"title": "Transformers Need Glasses! Information Over-Squashing in Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}]

Query: what is occupational therapy?

Timestamp: 2024-09-09T06:38:05.221950

Results:
[{"title": "The Capacity for Moral Self-Correction in Large Language Models", "url": "https://arxiv.org/abs/2302.07459"}, {"title": "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models", "url": "https://arxiv.org/abs/2303.10130"}, {"title": "Transformers Need Glasses! Information Over-Squashing in Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}]

Query: llm with memory

Timestamp: 2024-08-28T09:39:53.713952

Results:
[{"title": "LLM in a Flash: Efficient Large Language Model Inference with Limited Memory", "url": "https://arxiv.org/abs/2312.11514"}, {"title": "Augmenting Language Models with Long-Term Memory", "url": "https://arxiv.org/abs/2306.07174"}, {"title": "Efficient Memory Management for Large Language Model Serving with PagedAttention", "url": "https://arxiv.org/abs/2309.06180"}]

Query: flash attention

Timestamp: 2024-08-21T16:00:38.191555

Results:
[{"title": "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness", "url": "https://arxiv.org/abs/2205.14135"}, {"title": "Faster Causal Attention Over Large Sequences Through Sparse Flash Attention", "url": "https://arxiv.org/abs/2306.01160"}, {"title": "LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers", "url": "https://arxiv.org/abs/2310.03294"}]

Query: Knowledge editing in LLM

Timestamp: 2024-08-21T15:51:50.258707

Results:
[{"title": "MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions", "url": "https://arxiv.org/abs/2305.14795"}, {"title": "Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Self-Correction Strategies", "url": "https://arxiv.org/abs/2308.03188"}, {"title": "Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models", "url": "https://arxiv.org/abs/2308.00675"}]

Query: hello

Timestamp: 2024-08-20T15:14:31.658188

Results:
[{"title": "Skill-Mix: A Flexible and Expandable Family of Evaluations for AI Models", "url": "https://arxiv.org/abs/2310.17567"}, {"title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", "url": "https://arxiv.org/abs/2305.19118"}, {"title": "Thinking Like Transformers", "url": "https://arxiv.org/abs/2106.06981"}]

Query: what are you capable off

Timestamp: 2024-08-20T11:53:23.407348

Results:
[{"title": "The Alignment Problem from a Deep Learning Perspective", "url": "https://arxiv.org/abs/2209.00626"}, {"title": "Foundation Models And Fair Use", "url": "https://arxiv.org/abs/2303.15715"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}]

Query: reinforcement learning

Timestamp: 2024-08-18T18:53:05.051409

Results:
[{"title": "RLTF: Reinforcement Learning from Unit Test Feedback", "url": "https://arxiv.org/abs/2307.04349"}, {"title": "Neural Optimizer Search with Reinforcement Learning", "url": "https://arxiv.org/abs/1709.07417"}, {"title": "Deep Reinforcement Learning from Human Preferences", "url": "https://arxiv.org/abs/1706.03741"}]

Query: test

Timestamp: 2024-08-16T11:22:42.763778

Results:
[{"title": "Measuring Massive Multitask Language Understanding", "url": "https://arxiv.org/abs/2009.03300"}, {"title": "Evaluating Interpolation and Extrapolation Performance of Neural Retrieval Models", "url": "https://arxiv.org/abs/2204.11447"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}]

Query: Abstraction and Reasoning Corpus

Timestamp: 2024-08-15T11:10:30.414083

Results:
[{"title": "Connecting the Dots: Evaluating Abstract Reasoning Capabilities of LLMs Using the New York Times Connections Word Game", "url": "https://arxiv.org/abs/2406.11012"}, {"title": "Comparing Humans, GPT-4, and GPT-4V on Abstraction and Reasoning Tasks", "url": "https://arxiv.org/abs/2311.09247"}, {"title": "Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models", "url": "https://arxiv.org/abs/2310.06117"}]

Query: test

Timestamp: 2024-08-15T11:10:11.201385

Results:
[{"title": "Measuring Massive Multitask Language Understanding", "url": "https://arxiv.org/abs/2009.03300"}, {"title": "Evaluating Interpolation and Extrapolation Performance of Neural Retrieval Models", "url": "https://arxiv.org/abs/2204.11447"}, {"title": "On the Measure of Intelligence", "url": "https://arxiv.org/abs/1911.01547"}]

Query: Who are you?

Timestamp: 2024-08-14T12:28:59.594207

Results:
[{"title": "WebGPT: Browser-assisted question-answering with human feedback", "url": "https://arxiv.org/abs/2112.09332"}, {"title": "Transformers Need Glasses! Information Over-Squashing in Language Tasks", "url": "https://arxiv.org/abs/2406.04267"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]

Query: can you help with fasthtml?

Timestamp: 2024-08-14T04:32:36.405201

Results:
[{"title": "LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models", "url": "https://arxiv.org/abs/2306.12420"}, {"title": "QuALITY: Question Answering with Long Input Texts, Yes!", "url": "https://arxiv.org/abs/2112.08608"}, {"title": "The Flan Collection: Designing Data And Methods For Effective Instruction Tuning", "url": "https://arxiv.org/abs/2301.13688"}]

Query: Hi

Timestamp: 2024-08-13T21:53:10.547962

Results:
[{"title": "A Watermark for Large Language Models", "url": "https://arxiv.org/abs/2301.10226"}, {"title": "Thinking Like Transformers", "url": "https://arxiv.org/abs/2106.06981"}, {"title": "Data Curation via Joint Example Selection Further Accelerates Multimodal Learning", "url": "https://arxiv.org/abs/2406.17711"}]

Query: How are u

Timestamp: 2024-08-13T18:09:07.680129

Results:
[{"title": "U-Mamba: Enhancing Long-range Dependency for Biomedical Image Segmentation", "url": "https://arxiv.org/abs/2401.04722"}, {"title": "WebGPT: Browser-assisted question-answering with human feedback", "url": "https://arxiv.org/abs/2112.09332"}, {"title": "Transcending Scaling Laws with 0.1% Extra Compute", "url": "https://arxiv.org/abs/2210.11399"}]

Query: audio llm

Timestamp: 2024-08-13T15:56:28.087131

Results:
[{"title": "ChatMusician: Understanding and Generating Music Intrinsically with LLM", "url": "https://arxiv.org/abs/2402.16153"}, {"title": "CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation", "url": "https://arxiv.org/abs/2311.18775"}, {"title": "HEAR: Holistic Evaluation of Audio Representations", "url": "https://arxiv.org/abs/2203.03022"}]

Query: voice activity projection

Timestamp: 2024-08-13T15:56:16.188729

Results:
[{"title": "On the Horizon: Interactive and Compositional Deepfakes", "url": "https://arxiv.org/abs/2209.01714"}, {"title": "The emergence of number and syntax units in LSTM language models", "url": "https://arxiv.org/abs/1903.07435"}, {"title": "Scaling Data-Constrained Language Models", "url": "https://arxiv.org/abs/2305.16264"}]