localLLM: Running Local LLMs with 'llama.cpp' Backend

The 'localLLM' package provides R bindings to the 'llama.cpp' library for running large language models. It uses a lightweight architecture in which the C++ backend library is downloaded at runtime rather than bundled with the package. Features include text generation, reproducible generation, and parallel inference; a brief usage sketch follows.
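A minimal sketch of the intended workflow, shown here for orientation only; the function names install_localLLM() and quick_llama(), and the seed argument, are assumptions drawn from the project README rather than a verified API, so consult the reference manual below for the authoritative interface:

    library(localLLM)

    # One-time setup: download the pre-compiled 'llama.cpp' backend library
    # at runtime (assumed helper; needs internet access, see SystemRequirements)
    install_localLLM()

    # Generate text from a prompt; a fixed seed is assumed to make the
    # output reproducible across runs
    out <- quick_llama("Summarize what a GGUF model file is in one sentence.",
                       seed = 42)
    cat(out)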

Version: 1.0.1
Depends: R (≥ 3.6.0)
Imports: Rcpp (≥ 1.0.14), tools, utils
Suggests: testthat (≥ 3.0.0), covr
Published: 2025-10-15
DOI: 10.32614/CRAN.package.localLLM (may not be active yet)
Author: Eddie Yang [aut], Yaosheng Xu [aut, cre]
Maintainer: Yaosheng Xu <xu2009 at purdue.edu>
BugReports: https://github.com/EddieYang211/localLLM/issues
License: MIT + file LICENSE
URL: https://github.com/EddieYang211/localLLM
NeedsCompilation: yes
SystemRequirements: C++17, libcurl (optional, for model downloading)
CRAN checks: localLLM results

Documentation:

Reference manual: localLLM.html, localLLM.pdf

Downloads:

Package source: localLLM_1.0.1.tar.gz
Windows binaries: r-devel: not available, r-release: not available, r-oldrel: not available
macOS binaries: r-release (arm64): not available, r-oldrel (arm64): not available, r-release (x86_64): localLLM_1.0.1.tgz, r-oldrel (x86_64): localLLM_1.0.1.tgz
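Rather than downloading these files by hand, the standard CRAN call installs the package from within R; on platforms without a binary above, it builds from the source tarball, which requires a C++17 toolchain per SystemRequirements:

    # Standard CRAN installation; compiles from source where no binary exists
    install.packages("localLLM")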

Linking:

Please use the canonical form https://CRAN.R-project.org/package=localLLM to link to this page.