Choose a Package

Quick routing between np, npRmpi, and crs based on the job you want to do.
Keywords

np, npRmpi, crs, choose a package, kernel methods, splines

If you are deciding where to begin, the short answer is this: start with the package that matches the job, not with the longest document.

If you want to…

  • Fit kernel regression, density, distribution, or related tests on one machine → start with np. This is the core package and the natural starting point for most users.
  • Run the same style of kernel workflows on larger jobs using MPI → start with npRmpi. The package supports a modern session route on macOS/Linux and attach under mpiexec when that is the appropriate entry path.
  • Work with categorical regression splines and related constrained spline methods → start with crs. This package is the right entry point for spline-based work rather than kernel-based work.

A practical rule of thumb

  • Start with np unless you already know that you need MPI.
  • Move to npRmpi when the workflow is the same but the job is too large or too slow in serial.
  • Use crs when spline bases, shape restrictions, or spline-specific methods are the natural fit.
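If you are starting with np, a first serial run can be very small. The sketch below uses simulated data; npreg() is the np package's kernel regression routine, and by default it performs data-driven bandwidth selection, which is where most of the run time goes.

```r
## Minimal np sketch: kernel regression on simulated data.
## Assumes np is installed from CRAN.
library(np)

set.seed(42)
n <- 250
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.25)

## Cross-validated bandwidth selection happens inside this call by default,
## so it dominates the cost on larger problems.
model <- npreg(y ~ x)

summary(model)   # bandwidth, fit statistics
plot(model)      # fitted regression curve
```

The same bandwidth-selection cost is what eventually motivates the move to npRmpi: the workflow stays the same, only the job gets too large for serial execution.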

Common routes

I want a simple first success

Go to Install and Get Started, install the package you need, and then use Quickstarts for the smallest runnable scripts before worrying about longer documents.

I already know I want kernel methods

Go to Kernel Methods. That page starts with np and then points you to npRmpi when parallel execution is warranted.

I already know I need MPI

Go directly to MPI and Large Data. The guidance there is organized around current session, attach, and profile usage rather than the older wrapper-heavy style.

I want working scripts

Go first to Quickstarts if you want the shortest path to runnable code. Then use Code Catalog, Worked Examples, or Interactive Demos when you want more context or a broader script library.

First commands to remember

install.packages("np")
install.packages("crs")

For npRmpi, installation depends on your MPI setup and operating system, so it is covered separately on the MPI and Large Data page.
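After installing crs, a first run can mirror the np sketch above but with a spline basis and a categorical predictor, which is the kind of problem crs is built for. This is a minimal sketch on simulated data; by default crs() chooses the spline degree and number of knots by cross-validation.

```r
## Minimal crs sketch: regression spline with one continuous and one
## categorical predictor. Assumes crs is installed from CRAN.
library(crs)

set.seed(42)
n <- 250
x <- runif(n)
z <- factor(rbinom(n, 1, 0.5))              # categorical predictor
y <- cos(2 * pi * x) + ifelse(z == "1", 0.5, 0) + rnorm(n, sd = 0.25)

## Degree, knots, and the bandwidth for the factor are selected by
## cross-validation inside this call by default.
model <- crs(y ~ x + z)

summary(model)
```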
