* using log directory ‘/data/gannet/ripley/R/packages/tests-clang/modelbased.Rcheck’
* using R Under development (unstable) (2025-03-01 r87860)
* using platform: x86_64-pc-linux-gnu
* R was compiled by
    clang version 20.1.0-rc3
    flang version 20.1.0-rc3
* running under: Fedora Linux 40 (Workstation Edition)
* using session charset: UTF-8
* using option ‘--no-stop-on-test-error’
* checking for file ‘modelbased/DESCRIPTION’ ... OK
* checking extension type ... Package
* this is package ‘modelbased’ version ‘0.9.0’
* package encoding: UTF-8
* checking package namespace information ... OK
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for executable files ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking whether package ‘modelbased’ can be installed ... [10s/18s] OK
See 'https://www.r-project.org/nosvn/R.check/r-devel-linux-x86_64-fedora-clang/modelbased-00install.html' for details.
* checking installed package size ... OK
* checking package directory ... OK
* checking ‘build’ directory ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking code files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... OK
* checking whether the package can be loaded with stated dependencies ... OK
* checking whether the package can be unloaded cleanly ... OK
* checking whether the namespace can be loaded with stated dependencies ... OK
* checking whether the namespace can be unloaded cleanly ... OK
* checking loading without being on the library search path ... OK
* checking whether startup messages can be suppressed ... OK
* checking use of S3 registration ... OK
* checking dependencies in R code ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... [25s/73s] OK
* checking Rd files ... OK
* checking Rd metadata ... OK
* checking Rd line widths ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking contents of ‘data’ directory ... OK
* checking data for non-ASCII characters ... OK
* checking LazyData ... OK
* checking data for ASCII and uncompressed saves ... OK
* checking installed files from ‘inst/doc’ ... OK
* checking files in ‘vignettes’ ... OK
* checking examples ... [36s/110s] OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ... [91s/135s] ERROR
  Running ‘testthat.R’ [91s/133s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
  > # This file is part of the standard setup for testthat.
  > # It is recommended that you do not modify it.
  > #
  > # Where should you do additional test configuration?
  > #
  > # * https://r-pkgs.org/tests.html
  > # * https://testthat.r-lib.org/reference/test_package.html#special-files
  > library(testthat)
  > library(modelbased)
  > 
  > test_check("modelbased")
  Starting 2 test processes
  [ FAIL 4 | WARN 22 | SKIP 17 | PASS 165 ]
  
  ══ Skipped tests (17) ══════════════════════════════════════════════════════════
  • .Platform$OS.type == "windows" is not TRUE (1):
    'test-estimate_predicted.R:56:3'
  • On CRAN (13): 'test-brms-marginaleffects.R:1:1', 'test-brms.R:1:1',
    'test-estimate_contrasts.R:1:1', 'test-estimate_contrasts_methods.R:1:1',
    'test-estimate_means.R:1:1', 'test-estimate_means_counterfactuals.R:1:1',
    'test-estimate_means_mixed.R:1:1', 'test-g_computation.R:1:1',
    'test-get_marginaltrends.R:1:1', 'test-glmmTMB.R:1:1', 'test-ordinal.R:1:1',
    'test-predict-dpar.R:1:1', 'test-vcov.R:1:1'
  • On Linux (3): 'test-plot-facet.R:1:1', 'test-plot.R:1:1', 'test-print.R:1:1'
  
  ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Failure ('test-estimate_expectation.R:49:3'): estimate_expectation - data-grid ──
  dim(estim) (`actual`) not identical to c(10L, 5L) (`expected`).
  
    `actual`:  3 5
  `expected`: 10 5
  ── Failure ('test-estimate_predicted.R:149:3'): estimate_expectation - Frequentist ──
  dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
  
    `actual`:  3.0 6.0
  `expected`: 10.0 6.0
  ── Failure ('test-estimate_predicted.R:155:3'): estimate_expectation - Frequentist ──
  dim(estim) (`actual`) not equal to c(10, 6) (`expected`).
  
    `actual`:  3.0 6.0
  `expected`: 10.0 6.0
  ── Failure ('test-estimate_predicted.R:204:3'): estimate_expectation - predicting RE works ──
  out$Predicted (`actual`) not equal to c(...) (`expected`).
  
    `actual`: 12.2617 12.0693 11.1560 11.6318 11.1657 10.3811 11.1074 11.0749
  `expected`: 12.2064 12.0631 11.2071 11.6286 11.2327 10.5839 11.2085 11.1229
  
  [ FAIL 4 | WARN 22 | SKIP 17 | PASS 165 ]
  Error: Test failures
  Execution halted
* checking for unstated dependencies in vignettes ... OK
* checking package vignettes ... OK
* checking re-building of vignette outputs ... OK
* checking PDF version of manual ... [9s/25s] OK
* checking HTML version of manual ... OK
* checking for non-standard things in the check directory ... OK
* checking for detritus in the temp directory ... OK
* DONE
Status: 1 ERROR
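
For context, a sketch of how the same check could be re-run locally. The tarball name is hypothetical (inferred from the package name and version reported at the top of the log); the `--no-stop-on-test-error` option matches the one shown in the log, and an R-devel build is assumed to be on the `PATH` as `R`:

```shell
# Hypothetical reproduction of this check flavour.
# 'modelbased_0.9.0.tar.gz' is an assumed filename based on the
# version line in the log; adjust to the actual tarball.
R CMD check --no-stop-on-test-error modelbased_0.9.0.tar.gz

# To re-run only the failing test files named in the log, from an
# unpacked source tree (also a sketch, not taken from the log):
R -e 'testthat::test_file("tests/testthat/test-estimate_expectation.R")'
R -e 'testthat::test_file("tests/testthat/test-estimate_predicted.R")'
```

Note that several of the skipped tests are gated on `NOT_CRAN`, so a local run with `NOT_CRAN=true` set in the environment may exercise more tests than the CRAN run shown above.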