arm64: module/ftrace: initialize PLT at load time
Currently we lazily initialize a module's ftrace PLT at runtime when we
install the first ftrace call. To do so we have to apply a number of
sanity checks, transiently mark the module text as RW, and perform an
IPI as part of handling Neoverse-N1 erratum #1542419.

We only expect the ftrace trampoline to point at ftrace_caller() (AKA
FTRACE_ADDR), so let's simplify all of this by initializing the PLT at
module load time, before the module loader marks the module RO and
performs the initial I-cache maintenance for the module.

Thus we can rely on the module having been correctly initialized, and can
simplify the runtime work necessary to install an ftrace call in a
module. This will also allow for the removal of module_disable_ro().
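
For context, the module PLT entry initialized here is a three-instruction
veneer that materializes the branch target in a scratch register (x16, per
AAPCS64) and branches to it, reaching targets that an ordinary BL cannot.
A sketch of its shape, modelled on struct plt_entry from
arch/arm64/include/asm/module.h (shown for illustration only):

| struct plt_entry {
| 	__le32	adrp;	/* adrp	x16, <target page>	 */
| 	__le32	add;	/* add	x16, x16, #<page offset> */
| 	__le32	br;	/* br	x16			 */
| };

get_plt_entry(FTRACE_ADDR, plt) in the patch below fills in exactly this
shape, pointing the veneer at ftrace_caller().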

Tested by forcing ftrace_make_call() to use the module PLT, and then
loading up a module after setting up ftrace with:

| echo ":mod:<module-name>" > set_ftrace_filter;
| echo function > current_tracer;
| modprobe <module-name>

Since FTRACE_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is
selected, we wrap its use along with most of module_init_ftrace_plt()
with ifdeffery rather than using IS_ENABLED().
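
To illustrate the distinction (a hypothetical fragment, not from the
patch): IS_ENABLED() only discards a branch after the compiler has parsed
it, so any symbol named in the dead branch must still be defined, whereas
the preprocessor removes the reference entirely:

| /* Fails to build when CONFIG_DYNAMIC_FTRACE=n: FTRACE_ADDR is
|  * undefined there, even though the branch is dead code.
|  */
| if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE))
| 	addr = FTRACE_ADDR;
|
| /* Builds either way: the reference never reaches the compiler. */
| #ifdef CONFIG_DYNAMIC_FTRACE
| 	addr = FTRACE_ADDR;
| #endif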

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Mark Rutland committed Nov 6, 2019
1 parent bd8b21d commit f1a54ae
Showing 2 changed files with 35 additions and 52 deletions.
arch/arm64/kernel/ftrace.c: 14 additions & 41 deletions
@@ -73,9 +73,21 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 
 	if (offset < -SZ_128M || offset >= SZ_128M) {
 #ifdef CONFIG_ARM64_MODULE_PLTS
-		struct plt_entry trampoline, *dst;
 		struct module *mod;
 
+		/*
+		 * There is only one ftrace trampoline per module. For now,
+		 * this is not a problem since on arm64, all dynamic ftrace
+		 * invocations are routed via ftrace_caller(). This will need
+		 * to be revisited if support for multiple ftrace entry points
+		 * is added in the future, but for now, the pr_err() below
+		 * deals with a theoretical issue only.
+		 */
+		if (addr != FTRACE_ADDR) {
+			pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
+			return -EINVAL;
+		}
+
 		/*
 		 * On kernels that support module PLTs, the offset between the
 		 * branch instruction and its target may legally exceed the
@@ -93,46 +105,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 		if (WARN_ON(!mod))
 			return -EINVAL;
 
-		/*
-		 * There is only one ftrace trampoline per module. For now,
-		 * this is not a problem since on arm64, all dynamic ftrace
-		 * invocations are routed via ftrace_caller(). This will need
-		 * to be revisited if support for multiple ftrace entry points
-		 * is added in the future, but for now, the pr_err() below
-		 * deals with a theoretical issue only.
-		 *
-		 * Note that PLTs are place relative, and plt_entries_equal()
-		 * checks whether they point to the same target. Here, we need
-		 * to check if the actual opcodes are in fact identical,
-		 * regardless of the offset in memory so use memcmp() instead.
-		 */
-		dst = mod->arch.ftrace_trampoline;
-		trampoline = get_plt_entry(addr, dst);
-		if (memcmp(dst, &trampoline, sizeof(trampoline))) {
-			if (plt_entry_is_initialized(dst)) {
-				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
-				return -EINVAL;
-			}
-
-			/* point the trampoline to our ftrace entry point */
-			module_disable_ro(mod);
-			*dst = trampoline;
-			module_enable_ro(mod, true);
-
-			/*
-			 * Ensure updated trampoline is visible to instruction
-			 * fetch before we patch in the branch. Although the
-			 * architecture doesn't require an IPI in this case,
-			 * Neoverse-N1 erratum #1542419 does require one
-			 * if the TLB maintenance in module_enable_ro() is
-			 * skipped due to rodata_enabled. It doesn't seem worth
-			 * it to make it conditional given that this is
-			 * certainly not a fast-path.
-			 */
-			flush_icache_range((unsigned long)&dst[0],
-					   (unsigned long)&dst[1]);
-		}
-		addr = (unsigned long)dst;
+		addr = (unsigned long)mod->arch.ftrace_trampoline;
 #else /* CONFIG_ARM64_MODULE_PLTS */
 		return -EINVAL;
 #endif /* CONFIG_ARM64_MODULE_PLTS */
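
The SZ_128M bounds tested above correspond to the reach of an AArch64
direct branch: B and BL encode a signed 26-bit immediate counted in
4-byte words, so only targets within [-128M, +128M) of the branch can be
reached without a veneer. A standalone check of that arithmetic
(illustrative only, not part of the patch):

	#include <stdio.h>

	int main(void)
	{
		/* B/BL: signed 26-bit immediate, scaled by 4 bytes. */
		long min_off = -(1L << 25) * 4;		/* -134217728, i.e. -SZ_128M    */
		long max_off = ((1L << 25) - 1) * 4;	/*  134217724, i.e. SZ_128M - 4 */

		printf("direct branch range: [%ld, %ld]\n", min_off, max_off);
		return 0;
	}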
arch/arm64/kernel/module.c: 21 additions & 11 deletions
@@ -9,6 +9,7 @@
 
 #include <linux/bitops.h>
 #include <linux/elf.h>
+#include <linux/ftrace.h>
 #include <linux/gfp.h>
 #include <linux/kasan.h>
 #include <linux/kernel.h>
@@ -485,24 +486,33 @@ static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
 	return NULL;
 }
 
+static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
+				  const Elf_Shdr *sechdrs,
+				  struct module *mod)
+{
+#if defined(CONFIG_ARM64_MODULE_PLTS) && defined(CONFIG_DYNAMIC_FTRACE)
+	const Elf_Shdr *s;
+	struct plt_entry *plt;
+
+	s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
+	if (!s)
+		return -ENOEXEC;
+
+	plt = (void *)s->sh_addr;
+	*plt = get_plt_entry(FTRACE_ADDR, plt);
+	mod->arch.ftrace_trampoline = plt;
+#endif
+	return 0;
+}
+
 int module_finalize(const Elf_Ehdr *hdr,
 		    const Elf_Shdr *sechdrs,
 		    struct module *me)
 {
 	const Elf_Shdr *s;
-
 	s = find_section(hdr, sechdrs, ".altinstructions");
 	if (s)
 		apply_alternatives_module((void *)s->sh_addr, s->sh_size);
 
-#ifdef CONFIG_ARM64_MODULE_PLTS
-	if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE)) {
-		s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
-		if (!s)
-			return -ENOEXEC;
-		me->arch.ftrace_trampoline = (void *)s->sh_addr;
-	}
-#endif
-
-	return 0;
+	return module_init_ftrace_plt(hdr, sechdrs, me);
 }
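
With module_init_ftrace_plt() called from module_finalize(), the PLT is
written while the module is still writable, which is what lets
ftrace_make_call() drop module_disable_ro() and the explicit cache
maintenance. A minimal sketch of the loader ordering this relies on (step
names assumed from kernel/module.c around v5.4; stubs below stand in for
the real loader, which is not shown in the patch):

	#include <stdio.h>

	/* Stubs standing in for the generic module loader's steps. */
	static int post_relocation(void)	/* calls module_finalize(): PLT written here */
	{
		puts("ftrace PLT written while the module is still RW");
		return 0;
	}

	static void flush_module_icache(void)	/* initial I-cache maintenance */
	{
		puts("new PLT made visible to instruction fetch");
	}

	static int complete_formation(void)	/* permissions applied */
	{
		puts("module text marked read-only");
		return 0;
	}

	int main(void)
	{
		if (post_relocation())
			return 1;
		flush_module_icache();
		return complete_formation();
	}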
