[coreboot] [PATCH] fix 'AMD Fam10 code breaks with gcc 4.5.0'

Scott Duplichan scott at notabs.org
Mon Sep 6 05:32:17 CEST 2010

Resend: one more attempt to get this patch right. The previous
submission included the patch as an attachment, and the attachment
contained Windows-style line endings. The attachment is also missing
from the mailing list archive: "A non-text attachment was scrubbed".
This time the patch is inline, which should avoid the
line-ending problem.

The patch below allows AMD builds with gcc 4.5.0. An AMD Tilapia
BIOS built with gcc 4.5.0 and this patch has passed testing,
but only on the SimNow target. Can someone confirm that the
patch lets an AMD family 10h BIOS such as Tilapia work on real
hardware? In the meantime I will also try to get the change
tested on real hardware.

Root cause: after the function STOP_CAR_AND_CPU disables cache as
RAM, the cache-as-RAM stack can no longer be used. Functions called
after that point must be inlined to avoid stack usage, and the
compiler must keep their local variables in registers rather than
allocating them on the stack. With gcc 4.5.0, some functions
declared as inline are not actually being inlined. This patch forces
these functions to always be inlined by adding the qualifier
__attribute__((always_inline)) to their declarations.
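As a minimal sketch of the idea (the function names here are hypothetical, not from the patch): `inline` alone is only a hint, so the attribute is added next to it to make inlining a hard requirement rather than a heuristic decision.

```c
#include <stdint.h>

/* 'inline' alone is a hint; gcc 4.5.0 may still emit an
 * out-of-line copy of this function that allocates a stack
 * frame. The attribute turns failure to inline into an error,
 * so no call frame is needed after cache-as-RAM teardown. */
static inline __attribute__((always_inline)) uint32_t add_one(uint32_t x)
{
	return x + 1;
}

uint32_t caller(uint32_t v)
{
	/* add_one() is expanded in place here: no call instruction
	 * and no extra stack frame beyond caller's own. */
	return add_one(v);
}
```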

Update: still no test reports from real hardware are available.
If we cannot get this change tested on real hardware, I suggest
we compile it in conditionally, only when gcc 4.5.0 or later is used.
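Such a conditional could be sketched as follows (the macro name ALWAYS_INLINE is hypothetical, not part of the patch), keyed off gcc's predefined version macros so that only gcc 4.5.0 and later gets the attribute:

```c
/* Apply the attribute only when building with gcc >= 4.5.0;
 * older compilers, which inline these functions correctly on
 * their own, see a plain 'static inline'. */
#if defined(__GNUC__) && \
    (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))
#define ALWAYS_INLINE __attribute__((always_inline))
#else
#define ALWAYS_INLINE
#endif

static inline ALWAYS_INLINE unsigned twice(unsigned x)
{
	return x * 2;
}
```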


Signed-off-by: Scott Duplichan <scott at notabs.org>

Index: src/include/cpu/x86/msr.h
--- src/include/cpu/x86/msr.h	(revision 5777)
+++ src/include/cpu/x86/msr.h	(working copy)
@@ -29,7 +29,7 @@
         msr_t msr;
 } msrinit_t;
-static inline msr_t rdmsr(unsigned index)
+static inline __attribute__((always_inline)) msr_t rdmsr(unsigned index)
 {
 	msr_t result;
 	__asm__ __volatile__ (
@@ -40,7 +40,7 @@
 	return result;
-static inline void wrmsr(unsigned index, msr_t msr)
+static inline __attribute__((always_inline)) void wrmsr(unsigned index, msr_t msr)
 {
 	__asm__ __volatile__ (
Index: src/include/cpu/x86/cache.h
--- src/include/cpu/x86/cache.h	(revision 5777)
+++ src/include/cpu/x86/cache.h	(working copy)
@@ -74,7 +74,7 @@
 	asm volatile("invd" ::: "memory");
-static inline void enable_cache(void)
+static inline __attribute__((always_inline)) void enable_cache(void)
 {
 	unsigned long cr0;
 	cr0 = read_cr0();
@@ -82,7 +82,7 @@
-static inline void disable_cache(void)
+static inline __attribute__((always_inline)) void disable_cache(void)
 {
 	/* Disable and write back the cache */
 	unsigned long cr0;