# Optimizing Julia code
This session provides an introduction to optimizing Julia code. The examples are developed with Julia v1.9.3. You can download all files from the summer school website; this page renders the content of `optimizing_julia.jl`.
First, we install all required packages:

```julia
import Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()
```
The markdown file is created from the source code using Literate.jl. You can create the markdown file via

```julia
using Literate
Literate.markdown(joinpath(@__DIR__, "optimizing_julia.jl");
                  flavor = Literate.CommonMarkFlavor())
```
## Basics
The Julia manual contains basic information about performance. In particular, if you care about performance, you should be familiar with

- https://docs.julialang.org/en/v1/manual/performance-tips/
- https://docs.julialang.org/en/v1/manual/style-guide/
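As a small illustration of one of the most common tips from the performance manual (untyped global variables defeat type inference; pass data as function arguments instead), here is a minimal, hypothetical example that is not part of the session files:

```julia
# Untyped globals prevent type inference: `global_sum` is slow because
# `data` could be rebound to a value of any type at any time. Passing the
# data as an argument gives the compiler a concrete type to work with.
data = rand(1000)

function global_sum()
    s = 0.0
    for x in data       # untyped global: every access is dynamically typed
        s += x
    end
    return s
end

function argument_sum(v)
    s = 0.0
    for x in v          # `v` has a concrete, inferable type here
        s += x
    end
    return s
end
```

Benchmarking both versions (e.g., with `@btime global_sum()` vs. `@btime argument_sum($data)` from BenchmarkTools.jl) typically shows a difference of an order of magnitude or more.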
## Example: A linear advection simulation
This is a classical partial differential equation (PDE) simulation setup. If you are not familiar with it, just ignore what's going on - but we need an example. This one is similar to several problematic cases I have seen in practice.
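For the curious: the code below discretizes the linear advection equation $\partial_t u + \partial_x u = 0$ on the periodic domain $[-1, 1]$, using a first-order upwind difference in space and the explicit Euler method in time,

$$u_i^{\text{new}} = u_i - \frac{\Delta t}{\Delta x} \left( u_i - u_{i-1} \right),$$

where the index $i - 1$ wraps around at the left boundary.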
First, we generate initial data and store it in a file.
```julia
using HDF5

x = range(-1.0, 1.0, length = 1000)
dx = step(x)
h5open("initial_data.h5", "w") do io
    u0 = sinpi.(x)
    write_dataset(io, "x", collect(x))
    write_dataset(io, "u0", u0)
end
```
Next, we write our simulation code as a function - as we have learned from the performance tips!
```julia
function simulate()
    x, u0 = h5open("initial_data.h5", "r") do io
        read_dataset(io, "x"), read_dataset(io, "u0")
    end

    t = 0
    u = u0
    while t < t_end
        # explicit Euler time step
        u = u + dt * rhs(u)
        t = t + dt
    end

    return t, x, u, u0
end

function rhs(u)
    du = similar(u)
    for i in eachindex(u)
        # first-order upwind difference with periodic boundary conditions
        im1 = i == firstindex(u) ? lastindex(u) : (i - 1)
        du[i] = -(u[i] - u[im1]) / dx
    end
    return du
end
```
Now, we can define our parameters, run a simulation, and plot the results.
```julia
using Plots, LaTeXStrings

t_end = 2.5
dt = 0.9 * dx
t, x, u, u0 = simulate()

plot(x, u0; label = L"u_0", xguide = L"x")
plot!(x, u; label = L"u")
```
Maybe you can already spot problems in the code above. Anyway, before you begin optimizing some code, you should test it and make sure it's correct. Let's assume this is the case here. You should write tests making sure that the code continues to work as expected while we optimize it. We will not do this here right now.
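Just to illustrate the idea, a minimal sketch of such a test (hypothetical, not part of the session files): the periodic upwind discretization is conservative - the sum of the upwind differences telescopes to zero - so the discrete mass `sum(u)` should stay constant up to roundoff.

```julia
using Test

@testset "mass conservation" begin
    t, x, u, u0 = simulate()
    # sum(rhs(u)) telescopes to zero for periodic boundary conditions,
    # so the total mass must be conserved by every time step
    @test sum(u) ≈ sum(u0) atol = 1e-8
end
```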
## Profiling
First, we need to measure the performance of our code to see whether it's already reasonable. For this, you can use `@time` - or better, BenchmarkTools.jl.
```julia
using BenchmarkTools
@benchmark simulate()
```
This shows quite a lot of allocations - typically a bad sign if you are working with numerical simulations.
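To narrow down where the allocations come from, it can help to benchmark individual components. For example (an illustrative check, not part of the original session), every call to `rhs` allocates a fresh vector via `similar`:

```julia
# benchmark a single right-hand side evaluation on some test data;
# the allocation reported here stems from `similar(u)` inside `rhs`
u_test = rand(1000)
@benchmark rhs($u_test)
```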
There are also profiling tools available in Julia. Some of them are already loaded in the Visual Studio Code extension for Julia. If you prefer working in the REPL, you can install ProfileView.jl (`@profview simulate()`) and PProf.jl (`pprof()` after creating a profile via `@profview`).
```julia
@profview simulate()
```
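If you work in the REPL instead, the corresponding workflow could look as follows (a sketch; it assumes ProfileView.jl and PProf.jl are installed in your environment):

```julia
using ProfileView
@profview simulate()   # collect a profile and open a flame graph

using PProf
pprof()                # serve the collected profile in the pprof web UI
```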
## Task: Optimize the code!
You can find one improved version in the file `optimizing_julia_solution.jl`.
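To give an idea of the direction without spoiling everything, here is a sketch of the kind of fixes one could apply (hypothetical names; not necessarily identical to the solution file): pass `dx`, `dt`, and `t_end` as arguments instead of relying on untyped globals, and update the solution in-place to avoid allocating new vectors in every step.

```julia
# sketch of typical fixes (hypothetical; see optimizing_julia_solution.jl
# for the actual solution)
function rhs!(du, u, dx)
    for i in eachindex(u)
        im1 = i == firstindex(u) ? lastindex(u) : (i - 1)
        du[i] = -(u[i] - u[im1]) / dx
    end
    return du
end

function simulate_optimized(u0, dx, dt, t_end)
    u = copy(u0)
    du = similar(u)
    t = zero(t_end)
    while t < t_end
        rhs!(du, u, dx)      # no allocation inside the time loop
        @. u = u + dt * du   # fused broadcast, updates u in-place
        t += dt
    end
    return t, u
end
```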
## Introduction to generic Julia code and AD
One of the strengths of Julia is that you can use quite a few modern tools like automatic differentiation (AD). However, you need to learn Julia a bit to use everything efficiently. Here, we concentrate on ForwardDiff.jl.
```julia
using ForwardDiff
using Statistics

function foo0(x)
    vector = zeros(typeof(x), 100)
    foo0!(vector, x)
end

function foo0!(vector, scalar)
    for i in eachindex(vector)
        vector[i] = atan(i * scalar) / π
    end
    for _ in 1:1000
        for i in eachindex(vector)
            vector[i] = cos(vector[i])
        end
    end
    return mean(vector)
end
```
```julia
let x = 2 * π
    @show foo0(x)
    @show ForwardDiff.derivative(foo0, x)
    @benchmark foo0($x)
end
```
The code above is basically a fixed-point iteration. By doing some analysis, one could figure out that it should converge to the solution `x` of `x == cos(x)`, the "Dottie number". See https://en.wikipedia.org/wiki/Dottie_number
```julia
using SimpleNonlinearSolve

function dottie_number()
    f(u, p) = cos(u) - u
    bounds = (0.0, 2.0)
    prob = IntervalNonlinearProblem(f, bounds)
    sol = solve(prob, ITP())
    return sol.u
end

dottie_number()
```
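As a quick (illustrative) cross-check: after 1000 iterations of `cos`, every entry of the vector in `foo0` is extremely close to the Dottie number, so the mean returned by `foo0` should agree with the root we just computed.

```julia
# every entry of the iterated vector converges to the Dottie number,
# hence the mean returned by foo0 should match the root finder
dottie_number() ≈ foo0(2 * π)
```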
Next, we introduce a custom struct. Can you spot the problems?
```julia
struct MyData1
    scalar
    vector
end
```
```julia
function foo1(x)
    data = MyData1(x, zeros(typeof(x), 100))
    foo1!(data)
end

function foo1!(data)
    (; scalar, vector) = data
    for i in eachindex(vector)
        vector[i] = atan(i * scalar) / π
    end
    for _ in 1:1000
        for i in eachindex(vector)
            vector[i] = cos(vector[i])
        end
    end
    return mean(vector)
end
```
```julia
let x = 2 * π
    @show foo1(x)
    @show ForwardDiff.derivative(foo1, x)
    @benchmark foo1($x)
end
```
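One way to diagnose the problem (a suggestion beyond the original material): the untyped fields of `MyData1` are stored as `Any`, so nothing inside `foo1!` can be inferred. `@code_warntype` makes this visible in the REPL:

```julia
# the fields of MyData1 have type Any; @code_warntype highlights the
# resulting abstractly typed variables (shown in red in the REPL)
@code_warntype foo1!(MyData1(1.0, zeros(100)))
```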
We can fix the type instabilities by adding concrete field types explicitly. But this way, we lose the ability to use AD!
```julia
struct MyData2
    scalar::Float64
    vector::Vector{Float64}
end

function foo2(x)
    data = MyData2(x, zeros(typeof(x), 100))
    foo1!(data)
end
```
```julia
let x = 2 * π
    @show foo2(x)
    @benchmark foo2($x)
end
```
```julia
let x = 2 * π
    # this throws an error: ForwardDiff's dual numbers cannot be stored
    # in the hard-coded Float64 fields of MyData2
    ForwardDiff.derivative(foo2, x)
end
```
Thus, we need parametric types!
```julia
struct MyData3{T}
    scalar::T
    vector::Vector{T}
end

function foo3(x)
    data = MyData3(x, zeros(typeof(x), 100))
    foo1!(data)
end
```
```julia
let x = 2 * π
    @show foo3(x)
    @show ForwardDiff.derivative(foo3, x)
    @benchmark foo3($x)
end
```
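Now the type parameter `T` adapts to the input, so ForwardDiff's dual numbers propagate through the struct without any conversion. A quick illustrative check:

```julia
# the type parameter follows the field types chosen by the caller
MyData3(1.0, zeros(100))                      # MyData3{Float64}
x_dual = ForwardDiff.Dual(1.0, 1.0)
MyData3(x_dual, zeros(typeof(x_dual), 100))   # MyData3{<:ForwardDiff.Dual}
```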
This page was generated using Literate.jl.