[coreboot] K8 HT architecture
tsylla at gmail.com
Sat Oct 25 18:25:58 CEST 2008
On Fri, Oct 24, 2008 at 8:24 PM, Carl-Daniel Hailfinger
<c-d.hailfinger.devel.2006 at gmx.net> wrote:
>> AGESA has a default "discovery method" (I think
>> breadth first, lowest link number first) but it has options to
>> over-ride the discovery mechanism to change the order of nodes in a
>> system. All that matters is that the routing tables are correct and
>> consistent for the traffic to get where it needs to and without [...]
> Getting the routing tables right is non-trivial for MP setups,
> especially if we don't know how the hardware is wired. My hope was to be
> able to express the cHT topologies in a way which allows us to derive
> correct routing tables. I'm postponing that goal for now.
Yeah, that is a very complex thing to do. Just spewing values into the
routing table registers is a reasonable way to go.
>> Once that is complete, the processors just show up in PCI as
>> devices 18-1f (or fewer)
> They show up on bus 0 as you wrote. Will/can any devices attached via
> ncHT also show up on bus 0? If we have multiple ncHT links, what decides
> about the bus numbers for each of them?
Yes, they can sometimes, and it is sort of a special case. If you look
in my lspci dump, you will see lots of southbridge devices on bus 0.
If you added another ncHT device, e.g. another HT1000, that
southbridge would have to have its bus number shifted so devices would
not conflict. You could put it at 1, 6, 20, etc. Other nc devices are
the same; we can add up to 3 ncHT FPGAs to our system, and when we
do, they appear on busses 20, 21, and 22 (we picked
those and set them in our BIOS). I think I have seen coreboot code
using 40, 80, c0, etc. The NC devices I have seen all have registers
to program their PCI bus number. You might want to look at the HT
spec's information about bus numbering. It describes the reasoning
about SB stuff living on bus 0.
>> If we add or remove processors, nothing beside the 18-1f devices will
>> change (SB Bus numbers, device numbers, etc do not change). When we
>> add another *non* coherent HT device attached to one of the Opterons,
>> it gets a new bus number (we start at 20 with ours, but it is
>> arbitrary). All of the routing associated with HT for both coherent
>> and non-coherent is contained in the mapping registers and routing
>> table registers in all of the Opterons. The mapping registers map
>> mem/io/cfg regions to nodes, and the routing table says how to get to
>> that node. The ncHT devices can have BARs, and take up memory mapped
>> IO just the same as another PCI device.
> If I understand you correctly, it would be easy to have 00:01.0-00:0a.0
> appear as 01:01.0-01:0a.0 (bus 1) while still keeping the 18-1f devices
> on the hardcoded bus 0.
As long as the nc device lets you change the bus number that it sits
on (and it should, though I have only looked at a couple). You might
want to see how these ever-confusing options are used in v2:
HT_CHAIN_UNITID_BASE, HT_CHAIN_END_UNITID_BASE, SB_HT_CHAIN_ON_BUS0,
and so on.
>> I am a little bit confused by this. What are the exact differences you
>> see between coreboot and factory? The number of Opterons should be
>> the same. The position in config space of a particular socket may
>> change, based on node discovery differences between the BIOSes. There
>> is no reason for other devices to move because of the HT changes, but
>> they may move due to other differences in coreboot.
> IIRC I saw a board which only had 18-1f on bus 0 and everything else on
> other buses. AFAICS having devices on the same bus as the processor
> devices or not is a topology difference.
Hopefully it is clear now how things can move like that. The Opterons
won't move. It is possible with HT that other devices may exist on
higher bus numbers without a bridge (real or fake) from bus 0. It is
weird and not legacy-compatible, so it should not happen with NB and
SB devices. There are exceptions, though: when we connect our nc
FPGAs and put them at bus 20, we have no bridge in config space
connecting them to bus 0. By default, the Linux kernel will not find
them (it does a normal PCI scan, looking for bridges, subordinates,
etc.). We must advertise the non-contiguous PCI busses in an ACPI table
for Linux and Windows to "see" the higher busses that are not bridged
to bus 0. (There are some other ways to force the Linux kernel to find
the devices, but the ACPI method works for all the current OSes we've
tried.) The point is that making the PCI busses discontiguous is
"weird", and makes you jump through other hoops to play well with
the OS.
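For the curious, advertising such a non-bridged bus range usually means
declaring a second PCI root in the DSDT. A hypothetical ASL fragment
for a chain at bus 0x20 (device name, _UID, and bus range are made-up
examples, not taken from any real board):

```asl
// Hypothetical: a second PCI host bridge for the ncHT chain at bus
// 0x20, so the OS scans busses not reachable through a bus-0 bridge.
Device (PCI1)
{
    Name (_HID, EisaId ("PNP0A03"))  // PCI host bridge
    Name (_UID, 1)
    Name (_BBN, 0x20)                // base bus number of this root
    Name (_CRS, ResourceTemplate ()
    {
        WordBusNumber (ResourceProducer, MinFixed, MaxFixed, PosDecode,
            0x0000,  // granularity
            0x0020,  // range min: first bus of the chain
            0x0022,  // range max: last bus of the chain
            0x0000,  // translation
            0x0003)  // length: three busses
    })
}
```

The _CRS bus-number resource is what tells the OS to start a fresh scan
at bus 0x20 even though no bus-0 bridge leads there.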
> Would you mind posting lspci -tvnn for that 5-processor board as well?
> It would help me a lot to understand this issue better.
Yep, when I am at the machine again, I'll send it.