Discussion:
RPi 4 build time
Evgeniy Khramtsov via freebsd-arm
2021-05-21 20:07:35 UTC
Hi.

How long are compile times for an aarch64 8 GB RPi? It is especially
interesting to know about overclocked results. I guess buildworld time
would describe it well, but any heavy port (e.g. rust) would also be great.
Mark Millard via freebsd-arm
2021-05-21 20:58:36 UTC
Post by Evgeniy Khramtsov via freebsd-arm
How long are compile times for an aarch64 8 GB RPi? It is especially
interesting to know about overclocked results. I guess buildworld time
would describe it well, but any heavy port (e.g. rust) would also be great.
Getting useful figures may require specifying more context.
For example, building rust uses over 17 GiBytes of temporary file
space. This suggests that its build time may be very dependent on
the media in use. Also, the configuration of building ports in
parallel and/or allowing a port builder to potentially have a
ready-to-run process for each of the 4 cores makes for large
differences.
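(As a concrete sketch of the knobs I mean, in poudriere terms and
assuming its stock /usr/local/etc/poudriere.conf option names:

# build up to one port per core at a time
PARALLEL_JOBS=4
# allow each builder's make to run parallel jobs as well
ALLOW_MAKE_JOBS=yes

Other port-building arrangements have analogous settings.)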

Also, for buildworld, there is a large difference between when a
bootstrap set of clang/llvm materials is built vs. when that
extra clang/llvm material is not built (even if the normal
llvm materials are built).

There are also issues like whether ccache is in use and is providing
a significant amount of hot-cache results. (I do not use ccache.)
The same sort of thing applies to META_MODE use and rebuilds: was
the build "from scratch" or just a partial rebuild? (I do use
META_MODE.)
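(For reference, a minimal sketch of how ccache is commonly wired
into src builds, assuming devel/ccache is installed and the
standard src.conf knob is used:

# /etc/src.conf
WITH_CCACHE_BUILD=

With that set, buildworld's cc/c++ invocations go through ccache.)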

I've not done lang/rust builds on an RPi4B. (And, for ports, I
normally allow multiple ports to build at once, each allowed
to have a ready-to-run process per core.)

As for buildworld/buildkernel "from scratch", I have done
that and recorded some figures:

Context:

make[1]: "/usr/fbsd/mm-src/Makefile.inc1" line 339: SYSTEM_COMPILER: Determined that CC=cc matches the source tree. Not bootstrapping a cross-compiler.
make[1]: "/usr/fbsd/mm-src/Makefile.inc1" line 344: SYSTEM_LINKER: Determined that LD=ld matches the source tree. Not bootstrapping a cross-linker.

I use a USB3 SSD to hold the UFS file system, swap space,
and the msdos file system used in booting. No microsd card
use at all. The USB3 SSD seems to be fairly effective at
making the storage-I/O performant for the RPi4B context.

An oddity of my context is that I have code generation set
up to tune for cortex-a72 specifically. Both the system doing
the build and the built system were based on such tuning.
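(As a sketch, one way to request such tuning is via /etc/src.conf
lines like:

CFLAGS.clang+= -mcpu=cortex-a72
CXXFLAGS.clang+= -mcpu=cortex-a72

My exact setup differs in detail, but that is the general idea.)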

ENVIRONMENT: -mcpu=cortex-a72 RPi4B @ 2000 MHz, hw.physmem:8464072704 :
( arm_freq=2000, sdram_freq_min=3200, force_turbo=1 )

World build completed on Fri Mar 26 19:10:11 PDT 2021
World built in 22491 seconds, ncpu: 4, make -j4
Kernel build for GENERIC-NODBG completed on Fri Mar 26 19:38:33 PDT 2021
Kernel(s) GENERIC-NODBG built in 1702 seconds, ncpu: 4, make -j4

So World+Kernel took somewhat under 6 hrs 45 min to build.
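(From the figures above: 22491 s + 1702 s = 24193 s, i.e. about
6 hrs 43 min.)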

# ~/fbsd-based-on-what-freebsd-main.sh
merge-base: 7381bbee29df959e88ec59866cf2878263e7f3b2
merge-base: CommitDate: 2021-03-12 20:29:42 +0000
def0058cc690 (HEAD -> mm-src) mm-src snapshot for mm's patched build in git context.
7381bbee29df (freebsd/main, freebsd/HEAD, pure-src, main) cam: Run all XPT_ASYNC ccbs in a dedicated thread
FreeBSD RPi4B 14.0-CURRENT FreeBSD 14.0-CURRENT mm-src-n245445-def0058cc690 GENERIC-NODBG arm64 aarch64 1400005 1400005

I've gotten very similar time frames from builds that used
a ZFS file system on a USB3 SSD of the same type instead.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
Mark Millard via freebsd-arm
2021-05-21 21:16:50 UTC
Post by Mark Millard via freebsd-arm
Post by Evgeniy Khramtsov via freebsd-arm
How long are compile times for an aarch64 8 GB RPi? It is especially
interesting to know about overclocked results. I guess buildworld time
would describe it well, but any heavy port (e.g. rust) would also be great.
Getting useful figures may require specifying more context.
For example, building rust uses over 17 GiBytes of temporary file
space. This suggests that its build time may be very dependent on
the media in use. Also, the configuration of building ports in
parallel and/or allowing a port builder to potentially have a
ready-to-run process for each of the 4 cores makes for large
differences.
Also, for buildworld, there is a large difference between when a
bootstrap set of clang/llvm materials is built vs. when that
extra clang/llvm material is not built (even if the normal
llvm materials are built).
There are also issues like whether ccache is in use and is providing
a significant amount of hot-cache results. (I do not use ccache.)
The same sort of thing applies to META_MODE use and rebuilds: was
the build "from scratch" or just a partial rebuild? (I do use
META_MODE.)
I've not done lang/rust builds on an RPi4B. (And, for ports, I
normally allow multiple ports to build at once, each allowed
to have a ready-to-run process per core.)
As for buildworld/buildkernel "from scratch", I have done that
make[1]: "/usr/fbsd/mm-src/Makefile.inc1" line 339: SYSTEM_COMPILER: Determined that CC=cc matches the source tree. Not bootstrapping a cross-compiler.
make[1]: "/usr/fbsd/mm-src/Makefile.inc1" line 344: SYSTEM_LINKER: Determined that LD=ld matches the source tree. Not bootstrapping a cross-linker.
I use a USB3 SSD to hold the UFS file system, swap space,
and the msdos file system used in booting. No microsd card
use at all. The USB3 SSD seems to be fairly effective at
making the storage-I/O performant for the RPi4B context.
An oddity of my context is that I have code generation set
up to tune for cortex-a72 specifically. Both the system doing
the build and the built system were based on such tuning.
( arm_freq=2000, sdram_freq_min=3200, force_turbo=1 )
World build completed on Fri Mar 26 19:10:11 PDT 2021
World built in 22491 seconds, ncpu: 4, make -j4
Kernel build for GENERIC-NODBG completed on Fri Mar 26 19:38:33 PDT 2021
Kernel(s) GENERIC-NODBG built in 1702 seconds, ncpu: 4, make -j4
So World+Kernel took somewhat under 6 hrs 45 min to build.
# ~/fbsd-based-on-what-freebsd-main.sh
merge-base: 7381bbee29df959e88ec59866cf2878263e7f3b2
merge-base: CommitDate: 2021-03-12 20:29:42 +0000
def0058cc690 (HEAD -> mm-src) mm-src snapshot for mm's patched build in git context.
7381bbee29df (freebsd/main, freebsd/HEAD, pure-src, main) cam: Run all XPT_ASYNC ccbs in a dedicated thread
FreeBSD RPi4B 14.0-CURRENT FreeBSD 14.0-CURRENT mm-src-n245445-def0058cc690 GENERIC-NODBG arm64 aarch64 1400005 1400005
I've gotten very similar time frames from builds that used
a ZFS file system on a USB3 SSD of the same type instead.
I forgot to mention that for the buildworld/buildkernel
both the running system and the built materials were
non-debug builds, despite the build being of main [so:
14].


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
Evgeniy Khramtsov via freebsd-arm
2021-05-21 21:45:13 UTC
Post by Mark Millard via freebsd-arm
World built in 22491 seconds, ncpu: 4, make -j4
6 hours 45 minutes
This is impressive, considering that one old Athlon 64 space heater took more
than 9 hours to build FreeBSD 12 in 2017 when CC=cc, LD=ld matched the src tree.

Thanks for these results.

Couldn't reply to your mail directly because of some issue.
Mark Millard via freebsd-arm
2021-05-21 22:13:01 UTC
Post by Evgeniy Khramtsov via freebsd-arm
Post by Mark Millard via freebsd-arm
World built in 22491 seconds, ncpu: 4, make -j4
6 hours 45 minutes
This is impressive, considering that one old Athlon 64 space heater took more
than 9 hours to build FreeBSD 12 in 2017 when CC=cc, LD=ld matched the src tree.
I probably should have noted that to use:

arm_freq=2000
sdram_freq_min=3200
force_turbo=1

reliably, I also use:

over_voltage=6

(I've not tried to identify the minimum, just a
sufficient figure.)

I'll also note that:

https://www.raspberrypi.org/documentation/hardware/raspberrypi/revision-codes/README.md

documents in its note 2 that: "Warranty bit is never set on Pi4."
So, even more extreme combinations of force_turbo=1 and over_voltage
would not invalidate the warranty.

The 7 or so RPi4B's that I've had access to (four 8 GiByte, the rest
4 GiByte) all worked well with the combination indicated. I've not
tried to optimize to individual machine limits.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
tech-lists
2021-05-21 22:26:00 UTC
Hi,
Post by Evgeniy Khramtsov via freebsd-arm
How long are compile times for an aarch64 8 GB RPi? It is especially
interesting to know about overclocked results. I guess buildworld time
would describe it well, but any heavy port (e.g. rust) would also be great.
It depends. I've got it down to about 4 1/2 hrs for the
buildworld/buildkernel steps. But this is after all of the following
has been done:

for stable/13:

1. configuration and use of devel/ccache-static
2. clocking to 2.0 GHz with the following config.txt:

[...]
% less /boot/msdos/config.txt
arm_control=0x200
dtparam=audio=on,i2c_arm=on,spi=on
dtoverlay=mmc
dtoverlay=pwm
dtoverlay=disable-bt
device_tree_address=0x4000
kernel=u-boot.bin
over_voltage=6
arm_freq=2000
sdram_freq_min=3200

*make SURE you have good cooling!!!!!* I have a flirc rpi4 case on this
one.

3. /usr/obj /usr/src and /var/cache/ccache on zfs on usb3-connected
spinning rust

4. /tmp as tmpfs (512mb)
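(For reference, a sketch of such an entry in /etc/fstab, with the
exact mount options assumed rather than copied from my setup:

tmpfs /tmp tmpfs rw,mode=1777,size=512m 0 0 )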

5. make -j6 buildworld && make -j6 buildkernel (after make -j10 cleanworld
&& make -j10 cleandir && make -j10 clean)

6. having *already built* a new world and kernel, installed it all, and
rebooted; that world and kernel had been built with the following /etc/src.conf :

[...]
WITH_MALLOC_PRODUCTION=
WITHOUT_DEBUG_FILES=
WITH_CCACHE_BUILD=
WITH_OPENSSL_KTLS=

WITHOUT_APM=
WITHOUT_ASSERT_DEBUG=
WITHOUT_BLUETOOTH=
WITHOUT_CUSE=
WITHOUT_DICT=
WITHOUT_DMAGENT=
WITHOUT_FLOPPY=
WITHOUT_FREEBSD_UPDATE=
WITHOUT_HAST=
WITHOUT_IPFILTER=
WITHOUT_IPFW=
WITHOUT_ISCSI=
WITHOUT_KERNEL_SYMBOLS=
WITHOUT_LLVM_TARGET_ALL=
WITH_LLVM_TARGET_AARCH64=
WITH_LLVM_TARGET_ARM=
WITHOUT_LPR=
WITHOUT_NDIS=
WITHOUT_NETGRAPH=
WITHOUT_NIS=
WITHOUT_OFED=
WITHOUT_PORTSNAP=
WITHOUT_PPP=
WITHOUT_RADIUS_SUPPORT=
WITH_RATELIMIT=
WITHOUT_RBOOTD=
WITHOUT_ROUTED=
WITH_SORT_THREADS=
WITH_SVN=
WITHOUT_TALK=
WITHOUT_TESTS=
WITHOUT_TFTP=
WITHOUT_UNBOUND=
#
CFLAGS.clang+= -mcpu=cortex-a72
CXXFLAGS.clang+= -mcpu=cortex-a72
CPPFLAGS.clang+= -mcpu=cortex-a72
ACFLAGS.arm64cpuid.S+= -mcpu=cortex-a72+crypto
ACFLAGS.aesv8-armx.S+= -mcpu=cortex-a72+crypto
ACFLAGS.ghashv8-armx.S+= -mcpu=cortex-a72+crypto

(and afterwards: make check-old, then yes | make delete-old, then
yes | make delete-old-libs) then

7. with the following in /etc/sysctl.conf :
vfs.read_max=128

With regard to building ports (I use poudriere-devel) with jobs=4 I see
the following build times for the largest five ports built subsequently:

rust-1.51.0 took 7hrs 46mins
doxygen-1.9.1,2 took 1hr 36mins
texlive-texmf-20150523_4 took 1hr 36mins
llvm10-10.0.1_5 took 1hr 4mins
binutils-2.33.1_4,1 took 57mins 37s

The poudriere jail instance for this rpi4 uses the same /usr/src as
was used to build the OS. This means it was built with the same
/etc/src.conf parameters.

My other rpi4 (runs main/14; currently I'm testing it) will clock to 2.1GHz.
I've not thoroughly tested build times there yet.

I forgot to mention that both my stable/13 rpi4 and main/14 rpi4 run powerd
with these lines in /etc/rc.conf:

powerd_enable="YES"
powerd_flags="-r 1"
--
J.
Mark Millard via freebsd-arm
2021-05-21 22:51:35 UTC
So, if I read this right, you are reporting 4.5 hrs
for a "hot ccache" result, which I had mentioned as
one of the things leading to large variations in
reported build times.
tech-lists
2021-05-22 16:01:10 UTC
Post by Mark Millard via freebsd-arm
So, if I read this right, you are reporting 4.5 hrs
for a "hot ccache" result, which I had mentioned as
one of the things leading to large variations in
reported build times.
Hi,

not sure what you mean by "hot cache" - I always use devel/ccache-static,
as I have tended to build from source throughout my time of using FreeBSD.
It provides tremendous speedups, and generally I'll disable it only if a
problem arises and I am debugging it, or when crossing a version boundary
like from stable to current. What I'm saying is I don't know when ccache
was last used for building anything.

1. rpi4 here is clocked to 2.0GHz
2. ccache is in use and /var/cache/ccache has *not* been previously cleared
(i'll clear it for next test)

3. make cleanworld cleandir clean has been run on /usr/src
4. sources are at 246839

5. this rpi4 has the following properties for its disk:
[i] root-on-zfs
[ii] boot-to-usb3
[iii] 4k sectorsize forced
[iv] encrypted swapspace
[v] entire filesystem encryption

/etc/src.conf is
https://cloud.zyxst.net/~john/FreeBSD/rpi4-main/src.conf

make -j10 cleanworld started on Sat May 22 15:41:58 BST 2021
make -j10 cleanworld completed on Sat May 22 15:43:23 BST 2021

make -j10 cleandir started on Sat May 22 15:43:23 BST 2021
make -j10 cleandir completed on Sat May 22 15:43:50 BST 2021

make -j10 clean started on Sat May 22 15:43:50 BST 2021
make -j10 clean completed on Sat May 22 15:44:11 BST 2021

make -j6 buildworld started on Sat May 22 15:44:11 BST 2021
make -j6 buildworld completed on Sat May 22 16:20:48 BST 2021

make -j6 buildkernel started on Sat May 22 16:20:48 BST 2021
make -j6 buildkernel completed on Sat May 22 16:49:18 BST 2021
--
J.
Mark Millard via freebsd-arm
2021-05-22 20:12:23 UTC
Post by tech-lists
Post by Mark Millard via freebsd-arm
So, if I read this right, you are reporting 4.5 hrs
for a "hot ccache" result, which I had mentioned as
one of the things leading to large variations in
reported build times.
Hi,
not sure what you mean by "hot cache"
The first time ccache is used it has no prior results
to use to avoid compiles/links: an empty cache (a form
of "cold" cache). Another form of "cold" cache could
result from changing compiler options that would change
the code generated for (nearly) every file produced so
that the cache becomes ineffective.

"hot" refers to having a significant amount of
"effective/used cache content" that makes a notable
difference in the build times. I'm not that impressed
with the terminology but it is was I've seen used the
most frequently for ccache. So I used it.
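(If it helps: ccache itself can report how hot the cache is via its
statistics output, e.g.:

% ccache -s

The cache hit vs. cache miss counters show how much of a build was
served from the cache.)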
Post by tech-lists
- I always use devel/ccache-static,
as I have tended to build from source throughout my time of using FreeBSD.
It provides tremendous speedups, and generally I'll disable it only if a
problem arises and I am debugging it, or when crossing a version boundary
like from stable to current. What I'm saying is I don't know when ccache
was last used for building anything.
I'm confused how you can know it "provides tremendous
speedups" while simultaneously not knowing "when ccache
was last used for building anything". It sounds like you
think the 4.5 hr build might not have been getting a
notable speedup from ccache?

Remember that when comparing to my "from scratch"
build times: in my build everything was compiled
and linked, no prior build materials around to be
reused. So I'm reporting a context where I know
how to interpret the result and I'm presenting
enough history to establish a repeatable context.
Post by tech-lists
1. rpi4 here is clocked to 2.0GHz
2. ccache is in use and /var/cache/ccache has *not* been previously cleared
(i'll clear it for next test)
3. make cleanworld cleandir clean has been run on /usr/src
4. sources are at 246839
[i] root-on-zfs
[ii] boot-to-usb3
[iii] 4k sectorsize forced
[iv] encrypted swapspace
[v] entire filesystem encryption
FYI: My build-experiment boot media are never
encrypted for the file system or swap/paging
space. Another thing I'd not thought to comment
on. As I've reported, my UFS based and ZFS based
experiments get only minor variations in
build times (variations of minutes for from-
scratch builds that take hours).
Post by tech-lists
/etc/src.conf is
https://cloud.zyxst.net/~john/FreeBSD/rpi4-main/src.conf
make -j10 cleanworld started on Sat May 22 15:41:58 BST 2021
make -j10 cleanworld completed on Sat May 22 15:43:23 BST 2021
make -j10 cleandir started on Sat May 22 15:43:23 BST 2021
make -j10 cleandir completed on Sat May 22 15:43:50 BST 2021
make -j10 clean started on Sat May 22 15:43:50 BST 2021
make -j10 clean completed on Sat May 22 15:44:11 BST 2021
make -j6 buildworld started on Sat May 22 15:44:11 BST 2021
make -j6 buildworld completed on Sat May 22 16:20:48 BST 2021
So between 36 min and 37 min to rebuild the same version
with the same build options and compiler/link command lines
(near[?] maximal effective-ccache content that leads to
near[?] maximal avoidance of rebuild activity).

Cool.

For META_MODE builds, seeing how long it takes to go through
and discover that little or nothing needs to be rebuilt would
be the build time for the 2nd of back-to-back builds
(not even an install to the live system between). The META_MODE
use would then prevent most rebuild activity. I've not done
such a timing in a long time and it does not approximate any
normal build time for my typical rebuild patterns. So I do not
normally time that.

I'm not claiming META_MODE is similarly effective to ccache.
In fact, I know of cases where META_MODE rebuilds files that
ccache would avoid rebuilding: for example,
doing an install of a build to the live system between the
rebuilds has side effects that lead META_MODE to rebuild far
more things.
Post by tech-lists
make -j6 buildkernel started on Sat May 22 16:20:48 BST 2021
make -j6 buildkernel completed on Sat May 22 16:49:18 BST 2021
So between 28 min and 29 min to rebuild the same version with
the same build options and compiler/link command lines
(near[?] maximal effective-ccache content).

Total between 64 min and 66 min overall for buildworld buildkernel
for the near[?] maximal effective-ccache content and needing all
the files.
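(Worked out from the timestamps: buildworld 15:44:11 to 16:20:48 is
36 min 37 s; buildkernel 16:20:48 to 16:49:18 is 28 min 30 s; total
65 min 7 s.)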

Good to know. Thanks.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
Mark Millard via freebsd-arm
2021-05-22 23:51:31 UTC
Post by Mark Millard via freebsd-arm
Post by tech-lists
Post by Mark Millard via freebsd-arm
So, if I read this right, you are reporting 4.5 hrs
for a "hot ccache" result, which I had mentioned as
one of the things leading to large variations in
reported build times.
Hi,
not sure what you mean by "hot cache"
The first time ccache is used it has no prior results
to use to avoid compiles/links: an empty cache (a form
of "cold" cache). Another form of "cold" cache could
result from changing compiler options that would change
the code generated for (nearly) every file produced so
that the cache becomes ineffective.
"hot" refers to having a significant amount of
"effective/used cache content" that makes a notable
difference in the build times. I'm not that impressed
with the terminology but it is was I've seen used the
most frequently for ccache. So I used it.
Post by tech-lists
- I always use devel/ccache-static,
as I have tended to build from source throughout my time of using FreeBSD.
It provides tremendous speedups, and generally I'll disable it only if a
problem arises and I am debugging it, or when crossing a version boundary
like from stable to current. What I'm saying is I don't know when ccache
was last used for building anything.
I'm confused how you can know it "provides tremendous
speedups" while simultaneously not knowing "when ccache
was last used for building anything". It sounds like you
think the 4.5 hr build might not have been getting a
notable speedup from ccache?
Remember that when comparing to my "from scratch"
build times: in my build everything was compiled
and linked, no prior build materials around to be
reused. So I'm reporting a context where I know
how to interpret the result and I'm presenting
enough history to establish a repeatable context.
Post by tech-lists
1. rpi4 here is clocked to 2.0GHz
2. ccache is in use and /var/cache/ccache has *not* been previously cleared
(i'll clear it for next test)
3. make cleanworld cleandir clean has been run on /usr/src
4. sources are at 246839
[i] root-on-zfs
[ii] boot-to-usb3
[iii] 4k sectorsize forced
[iv] encrypted swapspace
[v] entire filesystem encryption
FYI: My build-experiment boot media are never
encrypted for the file system or swap/paging
space. Another thing I'd not thought to comment
on. As I've reported, my UFS based and ZFS based
experiments get only minor variations in
build times (variations of minutes for from-
scratch builds that take hours).
Post by tech-lists
/etc/src.conf is
https://cloud.zyxst.net/~john/FreeBSD/rpi4-main/src.conf
make -j10 cleanworld started on Sat May 22 15:41:58 BST 2021
make -j10 cleanworld completed on Sat May 22 15:43:23 BST 2021
make -j10 cleandir started on Sat May 22 15:43:23 BST 2021
make -j10 cleandir completed on Sat May 22 15:43:50 BST 2021
make -j10 clean started on Sat May 22 15:43:50 BST 2021
make -j10 clean completed on Sat May 22 15:44:11 BST 2021
make -j6 buildworld started on Sat May 22 15:44:11 BST 2021
make -j6 buildworld completed on Sat May 22 16:20:48 BST 2021
So between 36 min and 37 min to rebuild the same version
with the same build options and compiler/link command lines
(near[?] maximal effective-ccache content that leads to
near[?] maximal avoidance of rebuild activity).
Cool.
For META_MODE builds, seeing how long it takes to go through
and discover that little or nothing needs to be rebuilt would
be the build time for the 2nd of back-to-back builds
(not even an install to the live system between). The META_MODE
use would then prevent most rebuild activity. I've not done
such a timing in a long time and it does not approximate any
normal build time for my typical rebuild patterns. So I do not
normally time that.
I'm not claiming META_MODE is similarly effective to ccache.
In fact, I know of cases where META_MODE rebuilds files that
ccache would avoid rebuilding: for example,
doing an install of a build to the live system between the
rebuilds has side effects that lead META_MODE to rebuild far
more things.
Post by tech-lists
make -j6 buildkernel started on Sat May 22 16:20:48 BST 2021
make -j6 buildkernel completed on Sat May 22 16:49:18 BST 2021
So between 28 min and 29 min to rebuild the same version with
the same build options and compiler/link command lines
(near[?] maximal effective-ccache content).
Total between 64 min and 66 min overall for buildworld buildkernel
for the near[?] maximal effective-ccache content and needing all
the files.
Good to know. Thanks.
I happen to have ended up with an opportunity to do
(no cleanout of old results after the first rebuild,
no installation of any of the builds):

rebuild world
reboot
rebuild world

The 2nd rebuild of world got:

World built in 354 seconds, ncpu: 4, make -j4

So a little under 6 minutes via META_MODE. META_MODE
does end up causing some rebuild activity, just not
much. Much of it is re-linking.

I did another "rebuild world" without a new reboot and
got:

World built in 293 seconds, ncpu: 4, make -j4

So, somewhat under 5 minutes, with more context cached
in RAM.


A similar sequence for a debug build instead of non-debug
build (building machine running non-debug) got:

World built in 526 seconds, ncpu: 4, make -j4

So, somewhat under 9 minutes.

Then (no reboot between):

World built in 296 seconds, ncpu: 4, make -j4

So, somewhat under 5 minutes again.


In general these figures approximate the low bound for a
buildworld that is a (near) no-op, a bound not frequently
approached in my normal activity: it is rare for me to update
the source tree again and rebuild after only a few source
commits beyond what was originally built. For such cases,
sub-half-hour rebuilds can certainly occur via META_MODE use.

The context happened to be the ZFS based one in all
cases. Still no ccache use.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
tech-lists
2021-05-23 01:23:31 UTC
Post by Mark Millard via freebsd-arm
In general these figures approximate the low bound for a
buildworld that is a (near) no-op, a bound not frequently
approached in my normal activity: it is rare for me to update
the source tree again and rebuild after only a few source
commits beyond what was originally built. For such cases,
sub-half-hour rebuilds can certainly occur via META_MODE use.
The context happened to be the ZFS based one in all
cases. Still no ccache use.
That's wild. I have to look at meta mode.

My use case though mostly involves building/updating ports with
poudriere, and I'm happy it can use ccache.

Am I right in thinking meta mode is a buildworld/kernel thing only? I've
only heard of it; I know nothing about it.
--
J.
Mark Millard via freebsd-arm
2021-05-23 01:36:43 UTC
Post by tech-lists
Post by Mark Millard via freebsd-arm
In general these figures approximate the low bound for a
buildworld that is a (near) no-op, a bound not frequently
approached in my normal activity: it is rare for me to update
the source tree again and rebuild after only a few source
commits beyond what was originally built. For such cases,
sub-half-hour rebuilds can certainly occur via META_MODE use.
The context happened to be the ZFS based one in all
cases. Still no ccache use.
That's wild. I have to look at meta mode.
My use case though mostly involves building/updating ports with
poudriere, and I'm happy it can use ccache.
Am I right in thinking meta mode is a buildworld/kernel thing only? I've
only heard of it; I know nothing about it.
Yep: buildworld buildkernel only.

META_MODE does not help after a "rm -rf /usr/obj/*"
sort of clean-out. It just attempts to avoid rebuilding
materials already present that are sufficient. (It still
builds more than is strictly necessary: some of the
dependency tracking covers things that do not actually
imply needing a file rebuild. This is why installworld
to the live system ends up leading to a larger rebuild
later.)
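(In case it is useful, a sketch of how META_MODE is typically
enabled, assuming the standard knob and the filemon module it
depends on. In /etc/src-env.conf :

WITH_META_MODE=yes

and, before building, load the filemon kernel module:

# kldload filemon

Builds then leave .meta files under /usr/obj and consult them on
later runs to decide what needs rebuilding.)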

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
Mark Millard via freebsd-arm
2021-05-23 02:05:56 UTC
Post by Mark Millard via freebsd-arm
Post by tech-lists
Post by Mark Millard via freebsd-arm
In general these figures approximate the low bound for a
buildworld that is a (near) no-op, a bound not frequently
approached in my normal activity: it is rare for me to update
the source tree again and rebuild after only a few source
commits beyond what was originally built. For such cases,
sub-half-hour rebuilds can certainly occur via META_MODE use.
The context happened to be the ZFS based one in all
cases. Still no ccache use.
That's wild. I have to look at meta mode.
My use case though mostly involves building/updating ports with
poudriere, and I'm happy it can use ccache.
Am I right in thinking meta mode is a buildworld/kernel thing only? I've
only heard of it; I know nothing about it.
Yep: buildworld buildkernel only.
META_MODE does not help after a "rm -rf /usr/obj/*"
sort of clean-out. It just attempts to avoid rebuilding
materials already present that are sufficient. (It still
builds more than is strictly necessary: some of the
dependency tracking covers things that do not actually
imply needing a file rebuild. This is why installworld
to the live system ends up leading to a larger rebuild
later.)
I should have also mentioned the other side of
META_MODE: It is there to also be sure to rebuild
things that do need to be rebuilt. Its rebuilding
more than necessary generally avoids ending up with
insufficient/inaccurate rebuilds. Between ending
up with false positives vs. false negatives, it
has a definite bias.


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
Mark Millard via freebsd-arm
2021-05-26 22:42:40 UTC
Post by Mark Millard via freebsd-arm
Post by Mark Millard via freebsd-arm
Post by tech-lists
Post by Mark Millard via freebsd-arm
In general these figures approximate the low bound for a
buildworld that is a (near) no-op, a bound not frequently
approached in my normal activity: it is rare for me to update
the source tree again and rebuild after only a few source
commits beyond what was originally built. For such cases,
sub-half-hour rebuilds can certainly occur via META_MODE use.
The context happened to be the ZFS based one in all
cases. Still no ccache use.
That's wild. I have to look at meta mode.
My use case though mostly involves building/updating ports with
poudriere, and I'm happy it can use ccache.
Am I right in thinking meta mode is a buildworld/kernel thing only? I've
only heard of it; I know nothing about it.
Yep: buildworld buildkernel only.
META_MODE does not help after a "rm -rf /usr/obj/*"
sort of clean-out. It just attempts to avoid rebuilding
materials already present that are sufficient. (It still
builds more than is strictly necessary: some of the
dependency tracking covers things that do not actually
imply needing a file rebuild. This is why installworld
to the live system ends up leading to a larger rebuild
later.)
I should have also mentioned the other side of
META_MODE: It is there to also be sure to rebuild
things that do need to be rebuilt. Its rebuilding
more than necessary generally avoids ending up with
insufficient/inaccurate rebuilds. Between ending
up with false positives vs. false negatives, it
has a definite bias.
An example use of META_MODE: the buildworld buildkernel
used for updating from 13.0-RELEASE based to
13.0-RELEASE-p1 based (old build still around to
start from):

World build completed on Wed May 26 14:52:43 PDT 2021
World built in 612 seconds, ncpu: 4, make -j4

Kernel build for GENERIC-NODBG-CA72 completed on Wed May 26 15:20:36 PDT 2021
Kernel(s) GENERIC-NODBG-CA72 built in 1673 seconds, ncpu: 4, make -j4


It shows a mix: buildworld did not have much to
rebuild but buildkernel did have a lot to build.
Overall: somewhat under 40 minutes for buildworld
buildkernel to complete.


After installing and rebooting, my 13_0R-CA72-nodbg
boot environment is at:

# uname -apKU
FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p1 FreeBSD 13.0-RELEASE-p1 #1 releng/13.0-n244744-8023e729a521-dirty: Wed May 26 15:20:08 PDT 2021 ***@CA72_4c8G_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1300139 1300139

# ~/fbsd-based-on-what-commit.sh
branch: releng/13.0
merge-base: 8023e729a52192f89e539de760df194a70a91fda
merge-base: CommitDate: 2021-05-26 20:36:52 +0000
8023e729a521 (HEAD -> releng/13.0, freebsd/releng/13.0) Add UPDATING entries and bump version
n244744 (--first-parent --count for merge-base)


(buildworld buildkernel was via an ssh session. In
some contexts using the serial console causes more
time to be taken, just to display all the output
text during the activity; installworld is an
example in my contexts.)


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)

tech-lists
2021-05-23 01:12:23 UTC
Post by Mark Millard via freebsd-arm
of "cold" cache). Another form of "cold" cache could
result from changing compiler options that would change
the code generated for (nearly) every file produced so
that the cache becomes ineffective.
"hot" refers to having a significant amount of
"effective/used cache content" that makes a notable
difference in the build times. I'm not that impressed
with the terminology but it is was I've seen used the
most frequently for ccache. So I used it.
OK
Post by Mark Millard via freebsd-arm
I'm confused how you can know it "provides tremendous
speedups" while simultaneously not knowing "when ccache
was last used for building anything".
what I meant was "I'm not sure of the last time I built
anything that used ccache" or, more accurately, "I can't remember the
last time I built anything on that machine", because some building uses
ccache and some does not. But I know that buildworld and friends use ccache.
Post by Mark Millard via freebsd-arm
Remember that when comparing to my "from scratch"
build times: in my build everything was compiled
and linked, no prior build materials around to be
reused. So I'm reporting a context where I know
how to interpret the result and I'm presenting
enough history to establish a repeatable context.
OK I ran another build. Same sources.

1. rm -rf /usr/obj && mkdir /usr/obj
2. rm -rf /var/cache/ccache && mkdir /var/cache/ccache

then:

make -j10 cleanworld started on Sat May 22 19:10:02 BST 2021
make -j10 cleanworld completed on Sat May 22 19:10:02 BST 2021
#
make -j10 cleandir started on Sat May 22 19:10:02 BST 2021
make -j10 cleandir completed on Sat May 22 19:10:37 BST 2021
#
make -j10 clean started on Sat May 22 19:10:37 BST 2021
make -j10 clean completed on Sat May 22 19:10:58 BST 2021
#
make -j6 buildworld started on Sat May 22 19:10:58 BST 2021
make -j6 buildworld completed on Sun May 23 00:47:03 BST 2021
#
make -j6 buildkernel started on Sun May 23 00:47:03 BST 2021
make -j6 buildkernel completed on Sun May 23 01:20:31 BST 2021

so buildworld took 5hr 36min 5s and buildkernel 33min 28s from cold.
Post by Mark Millard via freebsd-arm
Post by tech-lists
make -j6 buildworld started on Sat May 22 15:44:11 BST 2021
make -j6 buildworld completed on Sat May 22 16:20:48 BST 2021
make -j6 buildkernel started on Sat May 22 16:20:48 BST 2021
make -j6 buildkernel completed on Sat May 22 16:49:18 BST 2021
36min 37s for make buildworld and 28min 30s for make buildkernel. This
is what I meant by "tremendous speedups". Other things get built on this
machine; it has a poudriere instance. So I guess anything using C or C++
will use the ccache when building. I might not know exactly *when*
unless I also know that whatever the machine was compiling at the time
used something that ccache could cache.
--
J.
Mark Millard via freebsd-arm
2021-05-23 01:29:03 UTC
Post by tech-lists
Post by Mark Millard via freebsd-arm
of "cold" cache). Another form of "cold" cache could
result from changing compiler options that would change
the code generated for (nearly) every file produced so
that the cache becomes ineffective.
"hot" refers to having a significant amount of
"effective/used cache content" that makes a notable
difference in the build times. I'm not that impressed
with the terminology but it is was I've seen used the
most frequently for ccache. So I used it.
OK
Post by Mark Millard via freebsd-arm
I'm confused how you can know it "provides tremendous
speedups" while simultaneously not knowing "when ccache
was last used for building anything".
what I meant was "I'm not sure of the last time I built
anything that used ccache" or, more accurately, "I can't remember the
last time I built anything on that machine", because some building uses
ccache and some does not. But I know that buildworld and friends use ccache.
Post by Mark Millard via freebsd-arm
Remember that when comparing to my "from scratch"
build times: in my build everything was compiled
and linked, no prior build materials around to be
reused. So I'm reporting a context where I know
how to interpret the result and I'm presenting
enough history to establish a repeatable context.
OK I ran another build. Same sources.
1. rm -rf /usr/obj && mkdir /usr/obj
2. rm -rf /var/cache/ccache && mkdir /var/cache/ccache
make -j10 cleanworld started on Sat May 22 19:10:02 BST 2021
make -j10 cleanworld completed on Sat May 22 19:10:02 BST 2021
#
make -j10 cleandir started on Sat May 22 19:10:02 BST 2021
make -j10 cleandir completed on Sat May 22 19:10:37 BST 2021
#
make -j10 clean started on Sat May 22 19:10:37 BST 2021
make -j10 clean completed on Sat May 22 19:10:58 BST 2021
#
make -j6 buildworld started on Sat May 22 19:10:58 BST 2021
make -j6 buildworld completed on Sun May 23 00:47:03 BST 2021
#
make -j6 buildkernel started on Sun May 23 00:47:03 BST 2021
make -j6 buildkernel completed on Sun May 23 01:20:31 BST 2021
so buildworld took 5hr 36min 5s and buildkernel 33min 28s from cold.
So, in your kind of context, if it is significantly faster than
those figures, you can infer that buildworld and/or buildkernel
was using the cache. (This presumes you are not also using META_MODE
or other such mechanisms. Otherwise there would be multiple
possibilities for the sources of avoided rebuild activity.)
Post by tech-lists
Post by Mark Millard via freebsd-arm
Post by tech-lists
make -j6 buildworld started on Sat May 22 15:44:11 BST 2021
make -j6 buildworld completed on Sat May 22 16:20:48 BST 2021
make -j6 buildkernel started on Sat May 22 16:20:48 BST 2021
make -j6 buildkernel completed on Sat May 22 16:49:18 BST 2021
36min 37s for make buildworld and 28min 30s for make buildkernel. This
is what I meant by "tremendous speedups". Other things get built on this
machine; it has a poudriere instance. So I guess anything using C or C++
will use the ccache when building. I might not know exactly *when*
unless I also know that whatever the machine was compiling at the time
used something that ccache could cache.
Nice to have examples of both numbers. Thanks.

poudriere has /usr/local/etc/poudriere.conf.sample, which
contains material about configuring poudriere for ccache
use:

# ccache support. Supply the path to your ccache cache directory.
# It will be mounted into the jail and be shared among all jails.
# It is recommended that extra ccache configuration be done with
# ccache -o rather than from the environment.
#CCACHE_DIR=/var/cache/ccache

# Static ccache support from host. This uses the existing
# ccache from the host in the build jail. This is useful for
# using ccache+memcached which cannot easily be bootstrapped
# otherwise. The path to the PREFIX where ccache was installed
# must be used here, and ccache must have been built statically.
# Note also that ccache+memcached will require network access
# which is normally disabled. Separately setting RESTRICT_NETWORKING=no
# may be required for non-localhost memcached servers.
#CCACHE_STATIC_PREFIX=/usr/local

and:

# List of packages that will always be allowed to use MAKE_JOBS
# regardless of ALLOW_MAKE_JOBS. This is useful for allowing ports
# which holdup the rest of the queue to build more quickly.
#ALLOW_MAKE_JOBS_PACKAGES="pkg ccache py*"

and:

# Define to yes to build and stage as a regular user
# Default: yes, unless CCACHE_DIR is set and CCACHE_DIR_NON_ROOT_SAFE is not
# set. Note that to use ccache with BUILD_AS_NON_ROOT you will need to
# use a non-shared CCACHE_DIR that is only built by PORTBUILD_USER and chowned
# to that user. Then set CCACHE_DIR_NON_ROOT_SAFE to yes.
#BUILD_AS_NON_ROOT=no

and:

# A list of directories to exclude from leftover and filesystem violation
# mtree checks. Ccache is used here as an example but is already
# excluded by default. There is no need to add it here unless a
# special configuration is used where it is a problem.
# Default: none
#LOCAL_MTREE_EXCLUDES="/usr/obj /var/tmp/ccache"

(Not that I've used such.) If I read that right, ccache is
not automatically used just because ccache is installed.
Instead /usr/local/etc/poudriere.conf needs to be adjusted.
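(A minimal sketch of that adjustment, reusing the path from the
sample above: set

CCACHE_DIR=/var/cache/ccache

in /usr/local/etc/poudriere.conf.)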

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
tech-lists
2021-05-23 01:32:06 UTC
Post by Mark Millard via freebsd-arm
(Not that I've used such.) If I read that right, ccache is
not automatically used just because ccache is installed.
Instead /usr/local/etc/poudriere.conf needs to be adjusted.
That's correct. My poudriere instance is configured to use it.
--
J.