/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright (c) 2004, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2011, 2016 by Delphix. All rights reserved.
 * Copyright 2013 Nexenta Systems, Inc. All rights reserved.
 * Copyright 2014 Josef "Jeff" Sipek <jeffpc@josefsipek.net>
 * Copyright 2020 Joyent, Inc.
 * Copyright 2025 Oxide Computer Company
 * Copyright 2024 MNX Cloud, Inc.
 */
/*
 * Copyright (c) 2010, Intel Corporation.
 * All rights reserved.
 */
/*
 * Portions Copyright 2009 Advanced Micro Devices, Inc.
 */

/*
 * CPU Identification logic
 *
 * The purpose of this file and its companion, cpuid_subr.c, is to help deal
 * with the identification of CPUs, their features, and their topologies. More
 * specifically, this file helps drive the following:
 *
 * 1. Enumeration of features of the processor which are used by the kernel to
 *    determine what features to enable or disable. These may be instruction set
 *    enhancements or features that we use.
 *
 * 2. Enumeration of instruction set architecture (ISA) additions that userland
 *    will be told about through the auxiliary vector.
 *
 * 3. Understanding the physical topology of the CPU such as the number of
 *    caches, how many cores it has, whether or not it supports simultaneous
 *    multi-threading (SMT), etc.
 *
 * ------------------------
 * CPUID History and Basics
 * ------------------------
 *
 * The cpuid instruction was added by Intel roughly around the time that the
 * original Pentium was introduced. The purpose of cpuid was to report, in a
 * programmatic fashion, information about the CPU that previously had to be
 * guessed at. For example, an important part of cpuid is that we can know what
 * extensions to the ISA exist. If you use an invalid opcode you would get a
 * #UD, so this method allows a program (whether a user program or the kernel)
 * to determine what exists without crashing or getting a SIGILL. Of course,
 * this was also during the era of the clones and the AMD Am5x86. The vendor
 * name shows up first in cpuid for a reason.
 *
 * cpuid information is broken down into ranges called 'leaves'. Each leaf puts
 * unique values into the registers %eax, %ebx, %ecx, and %edx and each leaf has
 * its own meaning. The different leaves are broken down into different regions:
 *
 *	[ 0, 7fffffff ]			This region is called the 'basic'
 *					region. This region is generally defined
 *					by Intel, though some of the original
 *					portions have different meanings based
 *					on the manufacturer. These days, Intel
 *					adds most new features to this region.
 *					AMD adds non-Intel compatible
 *					information in the third, extended
 *					region. Intel uses this for everything
 *					including ISA extensions, CPU
 *					features, cache information, topology,
 *					and more.
 *
 *					There is a hole carved out of this
 *					region which is reserved for
 *					hypervisors.
 *
 *	[ 40000000, 4fffffff ]		This region, which is found in the
 *					middle of the previous region, is
 *					explicitly promised to never be used by
 *					CPUs. Instead, it is used by hypervisors
 *					to communicate information about
 *					themselves to the operating system. The
 *					values and details are unique for each
 *					hypervisor.
 *
 *	[ 80000000, ffffffff ]		This region is called the 'extended'
 *					region. Some of the low leaves mirror
 *					parts of the basic leaves. This region
 *					has generally been used by AMD for
 *					various extensions. For example, AMD-
 *					specific information about caches,
 *					features, and topology are found in this
 *					region.
 *
 * To specify a range, you place the desired leaf into %eax, zero %ebx, %ecx,
 * and %edx, and then issue the cpuid instruction. At the first leaf in each of
 * the ranges, one of the primary things returned is the maximum valid leaf in
 * that range. This allows for discovery of what range of CPUID is valid.
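 *
 * As a minimal sketch of the mechanics (using the kernel's struct cpuid_regs
 * and the __cpuid_insn() helper used throughout this file, which fills in the
 * structure and returns the resulting %eax; the real code does this once and
 * caches the results in the struct cpuid_info):
 *
 *	struct cpuid_regs regs;
 *	uint32_t max_basic, max_ext;
 *
 *	bzero(&regs, sizeof (regs));
 *	regs.cp_eax = 0;			// first basic leaf
 *	max_basic = __cpuid_insn(&regs);	// maximum valid basic leaf
 *
 *	bzero(&regs, sizeof (regs));
 *	regs.cp_eax = 0x80000000;		// first extended leaf
 *	max_ext = __cpuid_insn(&regs);		// maximum valid extended leaf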
 *
 * The CPUs have potentially surprising behavior when using an invalid leaf or
 * unimplemented leaf. If the requested leaf is within the valid basic or
 * extended range, but is unimplemented, then %eax, %ebx, %ecx, and %edx will be
 * set to zero. However, if you specify a leaf that is outside of a valid range,
 * then instead it will be filled with the last valid _basic_ leaf. For example,
 * if the maximum basic value is on leaf 0x3, then issuing a cpuid for leaf 4 or
 * an invalid extended leaf will return the information for leaf 3.
 *
 * Some leaves are broken down into sub-leaves. This means that the value
 * depends on both the leaf asked for in %eax and a secondary register. For
 * example, Intel uses the value in %ecx on leaf 7 to indicate a sub-leaf to get
 * additional information. Or when getting topology information in leaf 0xb, the
 * initial value in %ecx changes which level of the topology you are getting
 * information about.
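 *
 * Continuing the sketch above, reading sub-leaf 0 of leaf 7 (the structured
 * extended feature leaf) just means loading %ecx as well:
 *
 *	bzero(&regs, sizeof (regs));
 *	regs.cp_eax = 7;	// leaf
 *	regs.cp_ecx = 0;	// sub-leaf
 *	(void) __cpuid_insn(&regs);
 *	// %eax now reports the maximum valid sub-leaf of leaf 7, while
 *	// %ebx, %ecx, and %edx contain feature bits.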
 *
 * cpuid values are always kept to 32 bits regardless of whether or not the
 * program is in 64-bit mode. When executing in 64-bit mode, the upper 32 bits
 * of the register are always set to zero so that the values are the same
 * regardless of execution mode.
 *
 * ----------------------
 * Identifying Processors
 * ----------------------
 *
 * We can identify a processor in two steps. The first step looks at cpuid leaf
 * 0. Leaf 0 contains the processor's vendor information. This is done by
 * putting a 12 character string in %ebx, %edx, and %ecx (in that byte order).
 * On AMD, it is 'AuthenticAMD' and on Intel it is 'GenuineIntel'.
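 *
 * A sketch of gathering the vendor string, reusing the struct cpuid_regs
 * sketch from earlier:
 *
 *	char vendor[13];
 *
 *	bzero(&regs, sizeof (regs));
 *	regs.cp_eax = 0;
 *	(void) __cpuid_insn(&regs);
 *	bcopy(&regs.cp_ebx, &vendor[0], 4);
 *	bcopy(&regs.cp_edx, &vendor[4], 4);
 *	bcopy(&regs.cp_ecx, &vendor[8], 4);
 *	vendor[12] = '\0';	// e.g. "GenuineIntel" or "AuthenticAMD"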
 *
 * From there, a processor is identified by a combination of three different
 * values:
 *
 *  1. Family
 *  2. Model
 *  3. Stepping
 *
 * Each vendor uses the family and model to uniquely identify a processor. The
 * way that family and model are changed depends on the vendor. For example,
 * Intel has been using family 0x6 for almost all of their processors since the
 * Pentium Pro/Pentium II era, often called the P6. The model is used to
 * identify the exact processor. Different models are often used for the client
 * (consumer) and server parts. Even though these processors often have major
 * architectural differences, Intel still considers them part of the same
 * family.
 *
 * On the other hand, each major AMD architecture generally has its own family.
 * For example, the K8 is family 0xf, Bulldozer 0x15, and Zen 0x17. Within a
 * family, the model number is used to help identify specific processors.  As
 * AMD's product lines have expanded, they have started putting a mixed bag of
 * processors into the same family, with each processor under a single
 * identifying banner (e.g., Milan, Cezanne) using a range of model numbers.  We
 * refer to each such collection as a processor family, distinct from cpuid
 * family.  Importantly, each processor family has a BIOS and Kernel Developer's
 * Guide (BKDG, older parts) or Processor Programming Reference (PPR) that
 * defines the processor family's non-architectural features.  In general, we'll
 * use "family" here to mean the family number reported by the cpuid instruction
 * and distinguish the processor family from it where appropriate.
 *
 * The stepping is used to refer to a revision of a specific microprocessor. The
 * term comes from equipment used to produce masks that are used to create
 * integrated circuits.
 *
 * The information is present in leaf 1, %eax. In technical documentation you
 * will see the terms extended model and extended family. The original family,
 * model, and stepping fields were each 4 bits wide. If the base family is 0xf,
 * the extended family field, which takes previously reserved bits, is added to
 * it; and when the base family is 0xf (or 0x6 on Intel), the extended model
 * bits are placed above the four base model bits. Together these allow for a
 * much larger number of families and models.
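 *
 * A sketch of that computation, starting from the raw leaf 1 %eax value (the
 * extended model case for base family 0x6 applies to Intel):
 *
 *	uint32_t eax = regs.cp_eax;		// cpuid leaf 1, %eax
 *	uint_t family = (eax >> 8) & 0xf;	// base family
 *	uint_t model = (eax >> 4) & 0xf;	// base model
 *	uint_t stepping = eax & 0xf;
 *
 *	if (family == 0xf)
 *		family += (eax >> 20) & 0xff;	// extended family
 *	if (family >= 0xf || family == 0x6)
 *		model |= ((eax >> 16) & 0xf) << 4;	// extended model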
 *
 * When we process this information, we store the full family, model, and
 * stepping in the struct cpuid_info members cpi_family, cpi_model, and
 * cpi_step, respectively. Whenever you are performing comparisons with the
 * family, model, and stepping, you should use these members and not the raw
 * values from cpuid. If you must use the raw values from cpuid directly, you
 * must make sure that you add the extended model and family to the base model
 * and family.
 *
 * In general, we do not use information about the family, model, and stepping
 * to determine whether or not a feature is present; that is generally driven by
 * specific leaves. However, when something we care about on the processor is
 * not considered 'architectural' meaning that it is specific to a set of
 * processors and not promised in the architecture model to be consistent from
 * generation to generation, then we will fall back on this information. The
 * most common cases where this comes up are when we have to work around errata
 * in the processor, are dealing with processor-specific features such as CPU
 * performance counters, or want to provide additional information for things
 * such as fault management.
 *
 * While processors also do have a brand string, which is the name that people
 * are familiar with when buying the processor, it is not meant for
 * programmatic consumption. That is what the family, model, and stepping are
 * for.
 *
 * We use the x86_chiprev_t to encode a combination of vendor, processor family,
 * and stepping(s) that refer to a single or very closely related set of silicon
 * implementations; while there are sometimes more specific ways to learn of the
 * presence or absence of a particular erratum or workaround, one may generally
 * assume that all processors of the same chiprev have the same errata and we
 * have chosen to represent them this way precisely because that is how AMD
 * groups them in their revision guides (errata documentation).  The processor
 * family (x86_processor_family_t) may be extracted from the chiprev if that
 * level of detail is not needed.  Processor families are considered unordered,
 * but revisions within a family may be compared either for an exact match or
 * for being at least as recent as a reference revision.  See the chiprev_xxx()
 * functions below.
 *
 * Similarly, each processor family implements a particular microarchitecture,
 * which itself may have multiple revisions.  In general, non-architectural
 * features are specific to a processor family, but some may exist across
 * families containing cores that implement the same microarchitectural revision
 * (and, such cores share common bugs, too).  We provide utility routines
 * analogous to those for extracting and comparing chiprevs for
 * microarchitectures as well; see the uarch_xxx() functions.
 *
 * Both chiprevs and uarchrevs are defined in x86_archext.h and both are at
 * present used and available only for AMD and AMD-like processors.
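 *
 * As a purely illustrative sketch, gating an erratum workaround on a given
 * revision or anything newer might look like the following, where
 * X86_CHIPREV_AMD_MILAN_B0 stands in for whichever chiprev constant from
 * x86_archext.h is relevant and apply_workaround() is a hypothetical helper:
 *
 *	if (chiprev_at_least(cpuid_getchiprev(CPU),
 *	    X86_CHIPREV_AMD_MILAN_B0)) {
 *		apply_workaround();	// hypothetical
 *	}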
23022e4c3acSKeith M Wesolowski  *
231ab5bb018SKeith M Wesolowski  * ------------
232ab5bb018SKeith M Wesolowski  * CPUID Passes
233ab5bb018SKeith M Wesolowski  * ------------
234ab5bb018SKeith M Wesolowski  *
235ab5bb018SKeith M Wesolowski  * As part of performing feature detection, we break this into several different
236ab5bb018SKeith M Wesolowski  * passes. There used to be a pass 0 that was done from assembly in locore.s to
237ab5bb018SKeith M Wesolowski  * support processors that have a missing or broken cpuid instruction (notably
238ab5bb018SKeith M Wesolowski  * certain Cyrix processors) but those were all 32-bit processors which are no
239ab5bb018SKeith M Wesolowski  * longer supported. Passes are no longer numbered explicitly to make it easier
240ab5bb018SKeith M Wesolowski  * to break them up or move them around as needed; however, they still have a
241ab5bb018SKeith M Wesolowski  * well-defined execution ordering enforced by the definition of cpuid_pass_t in
242ab5bb018SKeith M Wesolowski  * x86_archext.h. The external interface to execute a cpuid pass or determine
243ab5bb018SKeith M Wesolowski  * whether a pass has been completed consists of cpuid_execpass() and
244ab5bb018SKeith M Wesolowski  * cpuid_checkpass() respectively.  The passes now, in that execution order,
245ab5bb018SKeith M Wesolowski  * are as follows:
246ab5bb018SKeith M Wesolowski  *
247ab5bb018SKeith M Wesolowski  *	PRELUDE		This pass does not have any dependencies on system
248ab5bb018SKeith M Wesolowski  *			setup; in particular, unlike all subsequent passes it is
249ab5bb018SKeith M Wesolowski  *			guaranteed not to require PCI config space access.  It
250ab5bb018SKeith M Wesolowski  *			sets the flag indicating that the processor we are
251ab5bb018SKeith M Wesolowski  *			running on supports the cpuid instruction, which all
252ab5bb018SKeith M Wesolowski  *			64-bit processors do.  This would also be the place to
253ab5bb018SKeith M Wesolowski  *			add any other basic state that is required later on and
254ab5bb018SKeith M Wesolowski  *			can be learned without dependencies.
255ab5bb018SKeith M Wesolowski  *
256ab5bb018SKeith M Wesolowski  *	IDENT		Determine which vendor manufactured the CPU, the family,
257ab5bb018SKeith M Wesolowski  *			model, and stepping information, and compute basic
258ab5bb018SKeith M Wesolowski  *			identifying tags from those values.  This is done first
259ab5bb018SKeith M Wesolowski  *			so that machine-dependent code can control the features
260ab5bb018SKeith M Wesolowski  *			the cpuid instruction will report during subsequent
261ab5bb018SKeith M Wesolowski  *			passes if needed, and so that any intervening
262ab5bb018SKeith M Wesolowski  *			machine-dependent code that needs basic identity will
26322e4c3acSKeith M Wesolowski  *			have it available.  This includes synthesised
26422e4c3acSKeith M Wesolowski  *			identifiers such as chiprev and uarchrev as well as the
26522e4c3acSKeith M Wesolowski  *			values obtained directly from cpuid.  Prior to executing
 *			this pass, machine-dependent boot code is responsible for
 *			ensuring that the PCI configuration space access
 *			functions have been set up and, if necessary, that
 *			determine_platform() has been called.
 *
 *	BASIC		This is the primary pass and is responsible for doing a
 *			large number of different things:
 *
 *			1. Gathering a large number of feature flags to
 *			determine which features the CPU supports and which
 *			indicate things that we need to do other work in the OS
 *			to enable. Features detected this way are added to the
 *			x86_featureset which can be queried to determine what
 *			we should do (see the sketch following the pass
 *			descriptions below). This includes processing all of
 *			the basic and extended CPU features that we care about.
 *
 *			2. Determining the CPU's topology. This includes
 *			information about how many cores and threads are present
 *			in the package. It also is responsible for figuring out
 *			which logical CPUs are potentially part of the same core
 *			and what other resources they might share. For more
 *			information see the 'Topology' section.
 *
 *			3. Determining the set of CPU security-specific features
 *			that we need to worry about and determining the
 *			appropriate set of workarounds.
 *
 *			The BASIC pass on the boot CPU occurs before KMDB is
 *			started.
 *
 *	EXTENDED	The second pass is done after startup(). Here, we check
 *			other miscellaneous features. Most of this is gathering
 *			additional basic and extended features that we'll use in
 *			later passes or for debugging support.
 *
 *	DYNAMIC		The third pass occurs after the kernel memory allocator
 *			has been fully initialized. This gathers information
 *			for which we might need dynamic memory available. This
 *			includes several variable-width leaves that contain
 *			cache information and the processor's brand string.
 *
 *	RESOLVE		The fourth and final normal pass is performed after the
 *			kernel has brought most everything online. This is
 *			invoked from post_startup(). In this pass, we go through
 *			the set of features that we have enabled and turn that
 *			into the hardware auxiliary vector features that
 *			userland receives. This is used by userland, primarily
 *			by the run-time link-editor (RTLD), though userland
 *			software could also refer to it directly.
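 *
 * As mentioned in the BASIC pass description above, a minimal sketch of
 * consuming a feature recorded in x86_featureset:
 *
 *	if (is_x86_feature(x86_featureset, X86FSET_AVX)) {
 *		// AVX was detected during feature enumeration
 *	}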
 *
 * The function that performs a pass is currently assumed to be infallible, and
 * all existing implementations are.  This simplifies callers by allowing
 * cpuid_execpass() to return void. Similarly, implementers do not need to check
 * for a NULL CPU argument; the current CPU's cpu_t is substituted if necessary.
 * Both of these assumptions can be relaxed if needed by future developments.
 * Tracking of completed states is handled by cpuid_execpass(). It is programmer
 * error to attempt to execute a pass before all previous passes have been
 * completed on the specified CPU, or to request cpuid information before the
 * pass that captures it has been executed.  These conditions can be tested
 * using cpuid_checkpass().
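 *
 * For example, a simplified sketch of running the early passes on a CPU and
 * verifying completion before consuming the results (the BASIC pass takes the
 * feature bitmap to fill in as its argument; the real call sites are in the
 * boot and CPU startup paths):
 *
 *	cpuid_execpass(cp, CPUID_PASS_PRELUDE, NULL);
 *	cpuid_execpass(cp, CPUID_PASS_IDENT, NULL);
 *	cpuid_execpass(cp, CPUID_PASS_BASIC, featureset);
 *	ASSERT(cpuid_checkpass(cp, CPUID_PASS_BASIC));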
 *
 * ---------
 * Microcode
 * ---------
 *
 * Microcode updates may be applied by the firmware (BIOS/UEFI) and/or by the
 * operating system and may result in architecturally visible changes (e.g.,
 * changed MSR or CPUID bits). As such, we want to apply any updates as early
 * as possible during the boot process -- right after the IDENT pass.
 *
 * Microcode may also be updated at runtime via ucodeadm(8), after which we do
 * a selective rescan of the cpuid leaves to determine what features have
 * changed. Microcode updates can provide more details about security related
 * features to deal with issues like Spectre and L1TF. On occasion, vendors have
 * violated their contract and removed bits. However, we don't try to detect
 * that because that puts us in a situation that we really can't deal with. As
 * such, the only things we rescan are security-related features today. See
 * cpuid_pass_ucode(). This is not a pass in the same sense as the others and
 * is run on demand, via cpuid_post_ucodeadm().
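 *
 * A sketch of that flow, from the runtime microcode update path:
 *
 *	// ... new microcode has just been applied to every CPU ...
 *	cpuid_post_ucodeadm();	// rescan security-related cpuid state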
 *
 *
 * All of the passes are run on all CPUs. However, for the most part we only
 * care about what the boot CPU says about this information and use the other
 * CPUs as a rough guide to sanity check that we have the same feature set.
 *
 * We do not support running multiple logical CPUs with different, let alone
 * disjoint, feature sets.
 *
 * ------------------
 * Processor Topology
 * ------------------
 *
 * One of the important things that we need to do is to understand the topology
 * of the underlying processor. When we say topology in this case, we're trying
 * to understand the relationship between the logical CPUs that the operating
 * system sees and the underlying physical layout. Different logical CPUs may
 * share different resources which can have important consequences for the
 * performance of the system. For example, they may share caches, execution
 * units, and more.
 *
 * The topology of the processor changes from generation to generation and
 * vendor to vendor.  Along with that, different vendors use different
 * terminology, and the operating system itself uses occasionally overlapping
 * terminology. It's important to understand what this topology looks like so
 * one can understand the different things that we try to calculate and
 * determine.
 *
 * To get started, let's talk about a little bit of terminology that we've used
 * so far, is used throughout this file, and is fairly generic across multiple
 * vendors:
 *
 * CPU
 *	A central processing unit (CPU) refers to a logical and/or virtual
 *	entity that the operating system can execute instructions on. The
 *	underlying resources for this CPU may be shared between multiple
 *	entities; however, to the operating system it is a discrete unit.
 *
 * PROCESSOR and PACKAGE
 *
 *	Generally, when we use the term 'processor' on its own, we are referring
 *	to the physical entity that one buys and plugs into a board. However,
 *	because processor has been overloaded and one might see it used to mean
 *	multiple different levels, we will instead use the term 'package' for
 *	the rest of this file. The term package comes from the electrical
 *	engineering side and refers to the physical entity that encloses the
 *	electronics inside. Strictly speaking the package can contain more than
 *	just the CPU, for example, on many processors it may also have what's
 *	called an 'integrated graphical processing unit (GPU)'. Because the
 *	package can encapsulate multiple units, it is the largest physical unit
 *	that we refer to.
 *
 * SOCKET
 *
 *	A socket refers to a unit on a system board (generally the motherboard)
 *	that can receive a package. A single package, or processor, is plugged
 *	into a single socket. A system may have multiple sockets. Oftentimes,
 *	the term socket is used interchangeably with package and refers to the
 *	electrical component that is plugged in, and not the receptacle itself.
 *
 * CORE
 *
 *	A core refers to the physical instantiation of a CPU, generally, with a
 *	full set of hardware resources available to it. A package may contain
 *	multiple cores inside of it or it may just have a single one. A
 *	processor with more than one core is often referred to as 'multi-core'.
 *	In illumos, we will use the feature X86FSET_CMP to refer to a system
 *	that has 'multi-core' processors.
 *
 *	A core may expose a single logical CPU to the operating system, or it
 *	may expose multiple CPUs, which we call threads, defined below.
 *
 *	Some resources may still be shared by cores in the same package. For
 *	example, many processors will share the level 3 cache between cores.
 *	Some AMD generations share hardware resources between cores. For more
 *	information on that see the section 'AMD Topology'.
 *
 * THREAD and STRAND
 *
 *	In this file, generally a thread refers to a hardware resource and not
 *	the operating system's logical abstraction. A thread is always exposed
 *	as an independent logical CPU to the operating system. A thread belongs
 *	to a specific core. A core may have more than one thread. When that is
 *	the case, the threads that are part of the same core are often referred
 *	to as 'siblings'.
 *
 *	When multiple threads exist, this is generally referred to as
 *	simultaneous multi-threading (SMT). When Intel introduced this in their
 *	processors they called it hyper-threading (HT). When multiple threads
 *	are active in a core, they split the resources of the core. For example,
 *	two threads may share the same set of hardware execution units.
 *
 *	The operating system often uses the term 'strand' to refer to a thread.
 *	This helps disambiguate it from the software concept.
 *
 * CHIP
 *
 *	Unfortunately, the term 'chip' is dramatically overloaded. At its most
 *	base meaning, it is used to refer to a single integrated circuit, which
 *	may or may not be the only thing in the package. In illumos, when you
 *	see the term 'chip' it is almost always referring to the same thing as
 *	the 'package'. However, many vendors may use chip to refer to one of
 *	many integrated circuits that have been placed in the package. As an
 *	example, see the subsequent definition.
 *
 *	To try and keep things consistent, we will only use chip when referring
 *	to the entire integrated circuit package, with the exception of the
 *	definition of multi-chip module (because it is in the name) and use the
 *	term 'die' when we want the more general, potential sub-component
 *	definition.
 *
 * DIE
 *
 *	A die refers to an integrated circuit. Inside of the package there may
 *	be a single die or multiple dies. This is sometimes called a 'chip' in
 *	vendor's parlance, but in this file, we use the term die to refer to a
 *	subcomponent.
 *
 * MULTI-CHIP MODULE
 *
 *	A multi-chip module (MCM) refers to putting multiple distinct chips that
 *	are connected together in the same package. When a multi-chip design is
 *	used, generally each chip is manufactured independently and then joined
 *	together in the package. For example, on AMD's Zen microarchitecture
 *	(family 0x17), the package contains several dies (the second meaning of
 *	chip from above) that are connected together.
 *
 * CACHE
 *
 *	A cache is a part of the processor that maintains copies of recently
 *	accessed memory. Caches are split into levels and then into types.
 *	Commonly there are one to three levels, called level one, two, and
 *	three. The lower the level, the smaller it is, the closer it is to the
 *	execution units of the CPU, and the faster it is to access. The layout
 *	and design of the cache come in many different flavors; consult other
 *	resources for a discussion of those.
 *
 *	Caches are generally split into two types, the instruction and data
 *	cache. The caches contain what their names suggest, the instruction
 *	cache has executable program text, while the data cache has all other
 *	memory that the processor accesses. As of this writing, data is kept
 *	coherent between all of the caches on x86, so if one modifies program
 *	text before it is executed, that will be in the data cache, and the
 *	instruction cache will be synchronized with that change when the
 *	processor actually executes those instructions. This coherency also
 *	covers the fact that data could show up in multiple caches.
 *
 *	Generally, the lowest level caches are specific to a core. However, the
 *	last-level cache is shared between some number of cores. The number of
 *	CPUs sharing this last-level cache is important. This has implications
 *	for the choices that the scheduler makes, as accessing memory that might
 *	be in a remote cache after thread migration can be quite expensive.
 *
 *	Sometimes, the word cache is abbreviated with a '$', because in US
 *	English the word cache is pronounced the same as cash. So L1D$ refers to
 *	the L1 data cache, and L2$ would be the L2 cache. This will not be used
 *	in the rest of this theory statement for clarity.
 *
 * MEMORY CONTROLLER
 *
 *	The memory controller is a component that provides access to DRAM. Each
 *	memory controller can access a set number of DRAM channels. Each channel
 *	can have a number of DIMMs (sticks of memory) associated with it. A
 *	given package may have more than one memory controller. The association
 *	of the memory controller to a group of cores is important as it is
 *	cheaper to access memory on the controller that you are associated with.
 *
 * NUMA
 *
 *	NUMA, or non-uniform memory access, describes a way that systems are
 *	built. On x86, any processor core can address all of the memory in the
 *	system. However, when using multiple sockets or possibly within a
 *	multi-chip module, some of that memory is physically closer and some of
 *	it is further. Memory that is further away is more expensive to access.
 *	Consider the following image of multiple sockets with memory:
 *
 *	+--------+                                                +--------+
 *	| DIMM A |         +----------+      +----------+         | DIMM D |
 *	+--------+-+       |          |      |          |       +-+------+-+
 *	  | DIMM B |=======| Socket 0 |======| Socket 1 |=======| DIMM E |
 *	  +--------+-+     |          |      |          |     +-+------+-+
 *	    | DIMM C |     +----------+      +----------+     | DIMM F |
 *	    +--------+                                        +--------+
 *
 *	In this example, Socket 0 is closer to DIMMs A-C while Socket 1 is
 *	closer to DIMMs D-F. This means that it is cheaper for socket 0 to
 *	access DIMMs A-C and more expensive to access D-F as it has to go
 *	through Socket 1 to get there. The inverse is true for Socket 1. DIMMs
 *	D-F are cheaper than A-C. While the socket form is the most common, when
 *	using multi-chip modules, this can also sometimes occur. For another
 *	example of this that's more involved, see the AMD topology section.
 *
 *
 * Intel Topology
 * --------------
 *
 * Most Intel processors since Nehalem (as of this writing the current
 * generation is Skylake / Cannon Lake) follow a fairly similar pattern. The CPU
 * portion of the package is a single monolithic die. MCMs currently aren't
 * used. Most parts have three levels of caches, with the L3 cache being shared
 * between all of the cores on the package. The L1/L2 cache is generally
 * specific to an individual core. The following image shows at a simplified
 * level what this looks like. The memory controller is commonly part of
 * something called the 'Uncore', which used to be a separate physical chip
 * that was not a part of the package, but is now part of the same chip.
 *
 *  +-----------------------------------------------------------------------+
 *  | Package                                                               |
 *  |  +-------------------+  +-------------------+  +-------------------+  |
 *  |  | Core              |  | Core              |  | Core              |  |
 *  |  |  +--------+ +---+ |  |  +--------+ +---+ |  |  +--------+ +---+ |  |
 *  |  |  | Thread | | L | |  |  | Thread | | L | |  |  | Thread | | L | |  |
 *  |  |  +--------+ | 1 | |  |  +--------+ | 1 | |  |  +--------+ | 1 | |  |
 *  |  |  +--------+ |   | |  |  +--------+ |   | |  |  +--------+ |   | |  |
 *  |  |  | Thread | |   | |  |  | Thread | |   | |  |  | Thread | |   | |  |
 *  |  |  +--------+ +---+ |  |  +--------+ +---+ |  |  +--------+ +---+ |  |
 *  |  |  +--------------+ |  |  +--------------+ |  |  +--------------+ |  |
 *  |  |  | L2 Cache     | |  |  | L2 Cache     | |  |  | L2 Cache     | |  |
 *  |  |  +--------------+ |  |  +--------------+ |  |  +--------------+ |  |
 *  |  +-------------------+  +-------------------+  +-------------------+  |
 *  | +-------------------------------------------------------------------+ |
 *  | |                         Shared L3 Cache                           | |
 *  | +-------------------------------------------------------------------+ |
 *  | +-------------------------------------------------------------------+ |
 *  | |                        Memory Controller                          | |
 *  | +-------------------------------------------------------------------+ |
 *  +-----------------------------------------------------------------------+
 *
 * A side effect of this current architecture is that what we care about from a
 * scheduling and topology perspective is simplified. In general we care about
 * understanding which logical CPUs are part of the same core and socket.
 *
 * To determine the relationship between threads and cores, Intel initially used
 * the identifier in the advanced programmable interrupt controller (APIC). They
 * also added cpuid leaf 4 to give additional information about the number of
 * threads and CPUs in the processor. With the addition of x2apic (which
 * increased the number of addressable logical CPUs from 8-bits to 32-bits), an
 * additional cpuid topology leaf 0xB was added.
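 *
 * A sketch of walking leaf 0xB: each sub-leaf in %ecx describes one level of
 * the topology until a level of type 0 (invalid) is returned:
 *
 *	struct cpuid_regs regs;
 *	uint_t level;
 *
 *	for (level = 0; ; level++) {
 *		bzero(&regs, sizeof (regs));
 *		regs.cp_eax = 0xb;
 *		regs.cp_ecx = level;
 *		(void) __cpuid_insn(&regs);
 *		if (((regs.cp_ecx >> 8) & 0xff) == 0)
 *			break;		// type 0: no further levels
 *		// %ecx[15:8] is the level type (1 is SMT, 2 is core);
 *		// %eax[4:0] is how far to shift the x2APIC ID (returned
 *		// in %edx) to get the ID of the next level up.
 *	}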
 *
 * AMD Topology
 * ------------
 *
 * When discussing AMD topology, we want to break this into three distinct
 * generations of topology. There's the basic topology that has been used in
 * family 0xf+ (Opteron, Athlon64), there's the topology that was introduced
 * with family 0x15 (Bulldozer), and there's the topology that was introduced
 * with family 0x17 (Zen), evolved more dramatically in Zen 2 (still family
 * 0x17), and tweaked slightly in Zen 3 (family 0x19). AMD also has some
 * additional terminology that's worth talking about.
 *
 * Until the introduction of family 0x17 (Zen), AMD did not implement something
 * that they considered SMT. Whether or not the AMD processors have SMT
 * influences many things including scheduling and reliability, availability,
 * and serviceability (RAS) features.
 *
 * NODE
 *
 *	AMD uses the term node to refer to a die that contains a number of cores
 *	and I/O resources. Depending on the processor family and model, more
 *	than one node can be present in the package. When there is more than one
 *	node this indicates a multi-chip module. Usually each node has its own
 *	access to memory and I/O devices. This is important and generally
 *	different from the corresponding Intel Nehalem-Skylake+ processors. As a
 *	result, we track this relationship in the operating system.
 *
 *	In processors with an L3 cache, the L3 cache is generally shared across
 *	the entire node, though the way this is carved up varies from generation
 *	to generation.
 *
 * BULLDOZER
 *
 *	Starting with the Bulldozer family (0x15) and continuing until the
 *	introduction of the Zen microarchitecture, AMD introduced the idea of a
 *	compute unit. In a compute unit, two traditional cores share a number of
 *	hardware resources. Critically, they share the FPU, L1 instruction
 *	cache, and the L2 cache. Several compute units were then combined inside
 *	of a single node.  Because the integer execution units, L1 data cache,
 *	and some other resources were not shared between the cores, AMD never
 *	considered this to be SMT.
 *
 * ZEN
 *
 *	The Zen family (0x17) uses a multi-chip module (MCM) design; the module
 *	is called Zeppelin. These modules are similar to the idea of nodes used
 *	previously. Each of these nodes has two DRAM channels which all of the
 *	cores in the node can access uniformly. These nodes are linked together
 *	in the package, creating a NUMA environment.
 *
 *	The Zeppelin die itself contains two different 'core complexes'. Each
 *	core complex consists of four cores which each have two threads, for a
 *	total of 8 logical CPUs per complex. Unlike other generations,
 *	where all the logical CPUs in a given node share the L3 cache, here each
 *	core complex has its own shared L3 cache.
 *
 *	A further thing that we need to consider is that in some configurations,
 *	particularly with the Threadripper line of processors, not every die
 *	actually has its memory controllers wired up to actual memory channels.
 *	This means that some cores have memory attached to them and others
 *	don't.
 *
 *	To put Zen in perspective, consider the following images:
 *
 *      +--------------------------------------------------------+
 *      | Core Complex                                           |
 *      | +-------------------+    +-------------------+  +---+  |
 *      | | Core       +----+ |    | Core       +----+ |  |   |  |
 *      | | +--------+ | L2 | |    | +--------+ | L2 | |  |   |  |
 *      | | | Thread | +----+ |    | | Thread | +----+ |  |   |  |
 *      | | +--------+-+ +--+ |    | +--------+-+ +--+ |  | L |  |
 *      | |   | Thread | |L1| |    |   | Thread | |L1| |  | 3 |  |
 *      | |   +--------+ +--+ |    |   +--------+ +--+ |  |   |  |
 *      | +-------------------+    +-------------------+  | C |  |
 *      | +-------------------+    +-------------------+  | a |  |
 *      | | Core       +----+ |    | Core       +----+ |  | c |  |
 *      | | +--------+ | L2 | |    | +--------+ | L2 | |  | h |  |
 *      | | | Thread | +----+ |    | | Thread | +----+ |  | e |  |
 *      | | +--------+-+ +--+ |    | +--------+-+ +--+ |  |   |  |
 *      | |   | Thread | |L1| |    |   | Thread | |L1| |  |   |  |
 *      | |   +--------+ +--+ |    |   +--------+ +--+ |  |   |  |
 *      | +-------------------+    +-------------------+  +---+  |
 *      |                                                        |
 *	+--------------------------------------------------------+
 *
 *  This first image represents a single Zen core complex that consists of four
 *  cores.
 *
 *
 *	+--------------------------------------------------------+
 *	| Zeppelin Die                                           |
 *	|  +--------------------------------------------------+  |
 *	|  |         I/O Units (PCIe, SATA, USB, etc.)        |  |
 *	|  +--------------------------------------------------+  |
 *      |                           HH                           |
 *	|          +-----------+    HH    +-----------+          |
 *	|          |           |    HH    |           |          |
 *	|          |    Core   |==========|    Core   |          |
 *	|          |  Complex  |==========|  Complex  |          |
 *	|          |           |    HH    |           |          |
 *	|          +-----------+    HH    +-----------+          |
 *      |                           HH                           |
 *	|  +--------------------------------------------------+  |
 *	|  |                Memory Controller                 |  |
 *	|  +--------------------------------------------------+  |
 *      |                                                        |
 *	+--------------------------------------------------------+
 *
 *  This image represents a single Zeppelin die. Note how both core complexes
 *  are connected to the same memory controller and I/O units. While each core
 *  complex has its own L3 cache as seen in the first image, they both have
 *  uniform access to memory.
695ab5bb018SKeith M Wesolowski  *
696ab5bb018SKeith M Wesolowski  *
697ab5bb018SKeith M Wesolowski  *                      PP                     PP
698ab5bb018SKeith M Wesolowski  *                      PP                     PP
699ab5bb018SKeith M Wesolowski  *           +----------PP---------------------PP---------+
700ab5bb018SKeith M Wesolowski  *           |          PP                     PP         |
701ab5bb018SKeith M Wesolowski  *           |    +-----------+          +-----------+    |
702ab5bb018SKeith M Wesolowski  *           |    |           |          |           |    |
703ab5bb018SKeith M Wesolowski  *       MMMMMMMMM|  Zeppelin |==========|  Zeppelin |MMMMMMMMM
704ab5bb018SKeith M Wesolowski  *       MMMMMMMMM|    Die    |==========|    Die    |MMMMMMMMM
705ab5bb018SKeith M Wesolowski  *           |    |           |          |           |    |
706ab5bb018SKeith M Wesolowski  *           |    +-----------+ooo    ...+-----------+    |
707ab5bb018SKeith M Wesolowski  *           |          HH      ooo  ...       HH         |
708ab5bb018SKeith M Wesolowski  *           |          HH        oo..         HH         |
709ab5bb018SKeith M Wesolowski  *           |          HH        ..oo         HH         |
710ab5bb018SKeith M Wesolowski  *           |          HH      ...  ooo       HH         |
711ab5bb018SKeith M Wesolowski  *           |    +-----------+...    ooo+-----------+    |
712ab5bb018SKeith M Wesolowski  *           |    |           |          |           |    |
713ab5bb018SKeith M Wesolowski  *       MMMMMMMMM|  Zeppelin |==========|  Zeppelin |MMMMMMMMM
714ab5bb018SKeith M Wesolowski  *       MMMMMMMMM|    Die    |==========|    Die    |MMMMMMMMM
715ab5bb018SKeith M Wesolowski  *           |    |           |          |           |    |
716ab5bb018SKeith M Wesolowski  *           |    +-----------+          +-----------+    |
717ab5bb018SKeith M Wesolowski  *           |          PP                     PP         |
718ab5bb018SKeith M Wesolowski  *           +----------PP---------------------PP---------+
719ab5bb018SKeith M Wesolowski  *                      PP                     PP
720ab5bb018SKeith M Wesolowski  *                      PP                     PP
721ab5bb018SKeith M Wesolowski  *
722ab5bb018SKeith M Wesolowski  *  This image represents a single Zen package. In this example, it has four
723ab5bb018SKeith M Wesolowski  *  Zeppelin dies, though some configurations only have a single one. In this
724ab5bb018SKeith M Wesolowski  *  example, each die is directly connected to the next. Also, each die is
725ab5bb018SKeith M Wesolowski  *  represented as being connected to memory by the 'M' character and connected
 *  to PCIe devices and other I/O by the 'P' character. Because each Zeppelin
727ab5bb018SKeith M Wesolowski  *  die is made up of two core complexes, we have multiple different NUMA
728ab5bb018SKeith M Wesolowski  *  domains that we care about for these systems.
729ab5bb018SKeith M Wesolowski  *
730ab5bb018SKeith M Wesolowski  * ZEN 2
731ab5bb018SKeith M Wesolowski  *
 *	Zen 2 changes things in a dramatic way from Zen 1. Whereas in Zen 1
 *	each Zeppelin die contained its own I/O units and memory controller,
 *	in Zen 2 that functionality has been moved out of the compute die
 *	entirely. The actual core complex looks pretty similar, but now the die
 *	itself looks much simpler:
736ab5bb018SKeith M Wesolowski  *
737ab5bb018SKeith M Wesolowski  *      +--------------------------------------------------------+
738ab5bb018SKeith M Wesolowski  *      | Zen 2 Core Complex Die    HH                           |
739ab5bb018SKeith M Wesolowski  *      |                           HH                           |
740ab5bb018SKeith M Wesolowski  *      |          +-----------+    HH    +-----------+          |
741ab5bb018SKeith M Wesolowski  *      |          |           |    HH    |           |          |
742ab5bb018SKeith M Wesolowski  *      |          |    Core   |==========|    Core   |          |
743ab5bb018SKeith M Wesolowski  *      |          |  Complex  |==========|  Complex  |          |
744ab5bb018SKeith M Wesolowski  *      |          |           |    HH    |           |          |
745ab5bb018SKeith M Wesolowski  *      |          +-----------+    HH    +-----------+          |
746ab5bb018SKeith M Wesolowski  *      |                           HH                           |
747ab5bb018SKeith M Wesolowski  *      |                           HH                           |
748ab5bb018SKeith M Wesolowski  *      +--------------------------------------------------------+
749ab5bb018SKeith M Wesolowski  *
750ab5bb018SKeith M Wesolowski  *	From here, when we add the central I/O die, this changes things a bit.
751ab5bb018SKeith M Wesolowski  *	Each die is connected to the I/O die, rather than trying to interconnect
752ab5bb018SKeith M Wesolowski  *	them directly. The following image takes the same Zen 1 image that we
753ab5bb018SKeith M Wesolowski  *	had earlier and shows what it looks like with the I/O die instead:
754ab5bb018SKeith M Wesolowski  *
755ab5bb018SKeith M Wesolowski  *                                 PP    PP
756ab5bb018SKeith M Wesolowski  *                                 PP    PP
757ab5bb018SKeith M Wesolowski  *           +---------------------PP----PP---------------------+
758ab5bb018SKeith M Wesolowski  *           |                     PP    PP                     |
759ab5bb018SKeith M Wesolowski  *           |  +-----------+      PP    PP      +-----------+  |
760ab5bb018SKeith M Wesolowski  *           |  |           |      PP    PP      |           |  |
761ab5bb018SKeith M Wesolowski  *           |  |   Zen 2   |    +-PP----PP-+    |   Zen 2   |  |
762ab5bb018SKeith M Wesolowski  *           |  |    Die   _|    | PP    PP |    |_   Die    |  |
763ab5bb018SKeith M Wesolowski  *           |  |         |o|oooo|          |oooo|o|         |  |
764ab5bb018SKeith M Wesolowski  *           |  +-----------+    |          |    +-----------+  |
765ab5bb018SKeith M Wesolowski  *           |                   |   I/O    |                   |
766ab5bb018SKeith M Wesolowski  *       MMMMMMMMMMMMMMMMMMMMMMMMMM  Die   MMMMMMMMMMMMMMMMMMMMMMMMMM
767ab5bb018SKeith M Wesolowski  *       MMMMMMMMMMMMMMMMMMMMMMMMMM        MMMMMMMMMMMMMMMMMMMMMMMMMM
768ab5bb018SKeith M Wesolowski  *           |                   |          |                   |
769ab5bb018SKeith M Wesolowski  *       MMMMMMMMMMMMMMMMMMMMMMMMMM        MMMMMMMMMMMMMMMMMMMMMMMMMM
770ab5bb018SKeith M Wesolowski  *       MMMMMMMMMMMMMMMMMMMMMMMMMM        MMMMMMMMMMMMMMMMMMMMMMMMMM
771ab5bb018SKeith M Wesolowski  *           |                   |          |                   |
772ab5bb018SKeith M Wesolowski  *           |  +-----------+    |          |    +-----------+  |
773ab5bb018SKeith M Wesolowski  *           |  |         |o|oooo| PP    PP |oooo|o|         |  |
774ab5bb018SKeith M Wesolowski  *           |  |   Zen 2  -|    +-PP----PP-+    |-  Zen 2   |  |
775ab5bb018SKeith M Wesolowski  *           |  |    Die    |      PP    PP      |    Die    |  |
776ab5bb018SKeith M Wesolowski  *           |  |           |      PP    PP      |           |  |
777ab5bb018SKeith M Wesolowski  *           |  +-----------+      PP    PP      +-----------+  |
778ab5bb018SKeith M Wesolowski  *           |                     PP    PP                     |
779ab5bb018SKeith M Wesolowski  *           +---------------------PP----PP---------------------+
780ab5bb018SKeith M Wesolowski  *                                 PP    PP
781ab5bb018SKeith M Wesolowski  *                                 PP    PP
782ab5bb018SKeith M Wesolowski  *
 *	The above has four core complex dies installed, though the Zen 2 EPYC
 *	and Threadripper parts allow for up to eight, while the Ryzen parts
 *	generally only have one to two. The more notable difference here is how
 *	everything communicates. Note that memory and PCIe come out of the
 *	central die. This changes the way that one die accesses a resource. It
 *	basically always has to go through the I/O die, whereas in Zen 1 it may
 *	have satisfied the request locally. In general, this ends up being a
 *	better strategy for most things, though it is still possible to treat
 *	everything as four distinct NUMA domains, with each Zen 2 die slightly
 *	closer to some memory and PCIe than the rest. This also impacts the
 *	'amdzen' nexus driver as now there is only one 'node' present.
794ab5bb018SKeith M Wesolowski  *
795ab5bb018SKeith M Wesolowski  * ZEN 3
796ab5bb018SKeith M Wesolowski  *
797ab5bb018SKeith M Wesolowski  *	From an architectural perspective, Zen 3 is a much smaller change from
798ab5bb018SKeith M Wesolowski  *	Zen 2 than Zen 2 was from Zen 1, though it makes up for most of that in
799ab5bb018SKeith M Wesolowski  *	its microarchitectural changes. The biggest thing for us is how the die
 *	changes. In Zen 1 and Zen 2, each core complex still had its own L3
 *	cache. However, in Zen 3, the L3 cache is now shared across the entire
 *	core complex die and is no longer partitioned per core complex. This
 *	means that all cores on the die can share the same L3 cache. Otherwise,
804ab5bb018SKeith M Wesolowski  *	the general layout of the overall package with various core complexes
805ab5bb018SKeith M Wesolowski  *	and an I/O die stays the same. Here's what the Core Complex Die looks
806ab5bb018SKeith M Wesolowski  *	like in a bit more detail:
807ab5bb018SKeith M Wesolowski  *
808ab5bb018SKeith M Wesolowski  *               +-------------------------------------------------+
809ab5bb018SKeith M Wesolowski  *               | Zen 3 Core Complex Die                          |
810ab5bb018SKeith M Wesolowski  *               | +-------------------+    +-------------------+  |
811ab5bb018SKeith M Wesolowski  *               | | Core       +----+ |    | Core       +----+ |  |
812ab5bb018SKeith M Wesolowski  *               | | +--------+ | L2 | |    | +--------+ | L2 | |  |
813ab5bb018SKeith M Wesolowski  *               | | | Thread | +----+ |    | | Thread | +----+ |  |
814ab5bb018SKeith M Wesolowski  *               | | +--------+-+ +--+ |    | +--------+-+ +--+ |  |
815ab5bb018SKeith M Wesolowski  *               | |   | Thread | |L1| |    |   | Thread | |L1| |  |
816ab5bb018SKeith M Wesolowski  *               | |   +--------+ +--+ |    |   +--------+ +--+ |  |
817ab5bb018SKeith M Wesolowski  *               | +-------------------+    +-------------------+  |
818ab5bb018SKeith M Wesolowski  *               | +-------------------+    +-------------------+  |
819ab5bb018SKeith M Wesolowski  *               | | Core       +----+ |    | Core       +----+ |  |
820ab5bb018SKeith M Wesolowski  *               | | +--------+ | L2 | |    | +--------+ | L2 | |  |
821ab5bb018SKeith M Wesolowski  *               | | | Thread | +----+ |    | | Thread | +----+ |  |
822ab5bb018SKeith M Wesolowski  *               | | +--------+-+ +--+ |    | +--------+-+ +--+ |  |
823ab5bb018SKeith M Wesolowski  *               | |   | Thread | |L1| |    |   | Thread | |L1| |  |
824ab5bb018SKeith M Wesolowski  *               | |   +--------+ +--+ |    |   +--------+ +--+ |  |
825ab5bb018SKeith M Wesolowski  *               | +-------------------+    +-------------------+  |
826ab5bb018SKeith M Wesolowski  *               |                                                 |
827ab5bb018SKeith M Wesolowski  *               | +--------------------------------------------+  |
828ab5bb018SKeith M Wesolowski  *               | |                 L3 Cache                   |  |
829ab5bb018SKeith M Wesolowski  *               | +--------------------------------------------+  |
830ab5bb018SKeith M Wesolowski  *               |                                                 |
831ab5bb018SKeith M Wesolowski  *               | +-------------------+    +-------------------+  |
832ab5bb018SKeith M Wesolowski  *               | | Core       +----+ |    | Core       +----+ |  |
833ab5bb018SKeith M Wesolowski  *               | | +--------+ | L2 | |    | +--------+ | L2 | |  |
834ab5bb018SKeith M Wesolowski  *               | | | Thread | +----+ |    | | Thread | +----+ |  |
835ab5bb018SKeith M Wesolowski  *               | | +--------+-+ +--+ |    | +--------+-+ +--+ |  |
836ab5bb018SKeith M Wesolowski  *               | |   | Thread | |L1| |    |   | Thread | |L1| |  |
837ab5bb018SKeith M Wesolowski  *               | |   +--------+ +--+ |    |   +--------+ +--+ |  |
838ab5bb018SKeith M Wesolowski  *               | +-------------------+    +-------------------+  |
839ab5bb018SKeith M Wesolowski  *               | +-------------------+    +-------------------+  |
840ab5bb018SKeith M Wesolowski  *               | | Core       +----+ |    | Core       +----+ |  |
841ab5bb018SKeith M Wesolowski  *               | | +--------+ | L2 | |    | +--------+ | L2 | |  |
842ab5bb018SKeith M Wesolowski  *               | | | Thread | +----+ |    | | Thread | +----+ |  |
843ab5bb018SKeith M Wesolowski  *               | | +--------+-+ +--+ |    | +--------+-+ +--+ |  |
844ab5bb018SKeith M Wesolowski  *               | |   | Thread | |L1| |    |   | Thread | |L1| |  |
845ab5bb018SKeith M Wesolowski  *               | |   +--------+ +--+ |    |   +--------+ +--+ |  |
846ab5bb018SKeith M Wesolowski  *               | +-------------------+    +-------------------+  |
847ab5bb018SKeith M Wesolowski  *               +-------------------------------------------------+
848ab5bb018SKeith M Wesolowski  *
849ab5bb018SKeith M Wesolowski  *	While it is not pictured, there are connections from the die to the
850ab5bb018SKeith M Wesolowski  *	broader data fabric and additional functional blocks to support that
851ab5bb018SKeith M Wesolowski  *	communication and coherency.
852ab5bb018SKeith M Wesolowski  *
853ab5bb018SKeith M Wesolowski  * CPUID LEAVES
854ab5bb018SKeith M Wesolowski  *
855ab5bb018SKeith M Wesolowski  * There are a few different CPUID leaves that we can use to try and understand
856ab5bb018SKeith M Wesolowski  * the actual state of the world. As part of the introduction of family 0xf, AMD
857ab5bb018SKeith M Wesolowski  * added CPUID leaf 0x80000008. This leaf tells us the number of logical
858ab5bb018SKeith M Wesolowski  * processors that are in the system. Because families before Zen didn't have
859ab5bb018SKeith M Wesolowski  * SMT, this was always the number of cores that were in the system. However, it
860ab5bb018SKeith M Wesolowski  * should always be thought of as the number of logical threads to be consistent
861ab5bb018SKeith M Wesolowski  * between generations. In addition we also get the size of the APIC ID that is
862ab5bb018SKeith M Wesolowski  * used to represent the number of logical processors. This is important for
863ab5bb018SKeith M Wesolowski  * deriving topology information.
864ab5bb018SKeith M Wesolowski  *
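 * As an illustrative sketch (using the BITX() bit-extraction macro the same
 * way code in this file does), the interesting fields of %ecx from leaf
 * 0x80000008 can be pulled out as follows, where 'cp' is assumed to hold the
 * register state returned by the cpuid instruction:
 *
 *	uint_t nthreads = BITX(cp->cp_ecx, 7, 0) + 1;
 *	uint_t apic_id_size = BITX(cp->cp_ecx, 15, 12);
 *
 * Here, nthreads is the total number of logical processors in the package and
 * apic_id_size is the number of bits the APIC ID may consume.
 *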
865ab5bb018SKeith M Wesolowski  * In the Bulldozer family, AMD added leaf 0x8000001E. The information varies a
866ab5bb018SKeith M Wesolowski  * bit between Bulldozer and later families, but it is quite useful in
867ab5bb018SKeith M Wesolowski  * determining the topology information. Because this information has changed
868ab5bb018SKeith M Wesolowski  * across family generations, it's worth calling out what these mean
869ab5bb018SKeith M Wesolowski  * explicitly. The registers have the following meanings:
870ab5bb018SKeith M Wesolowski  *
871ab5bb018SKeith M Wesolowski  *	%eax	The APIC ID. The entire register is defined to have a 32-bit
872ab5bb018SKeith M Wesolowski  *		APIC ID, even though on systems without x2apic support, it will
873ab5bb018SKeith M Wesolowski  *		be limited to 8 bits.
874ab5bb018SKeith M Wesolowski  *
875ab5bb018SKeith M Wesolowski  *	%ebx	On Bulldozer-era systems this contains information about the
876ab5bb018SKeith M Wesolowski  *		number of cores that are in a compute unit (cores that share
877ab5bb018SKeith M Wesolowski  *		resources). It also contains a per-package compute unit ID that
878ab5bb018SKeith M Wesolowski  *		identifies which compute unit the logical CPU is a part of.
879ab5bb018SKeith M Wesolowski  *
880ab5bb018SKeith M Wesolowski  *		On Zen-era systems this instead contains the number of threads
881ab5bb018SKeith M Wesolowski  *		per core and the ID of the core that the logical CPU is a part
882ab5bb018SKeith M Wesolowski  *		of. Note, this ID is unique only to the package, it is not
883ab5bb018SKeith M Wesolowski  *		globally unique across the entire system.
884ab5bb018SKeith M Wesolowski  *
885ab5bb018SKeith M Wesolowski  *	%ecx	This contains the number of nodes that exist in the package. It
886ab5bb018SKeith M Wesolowski  *		also contains an ID that identifies which node the logical CPU
887ab5bb018SKeith M Wesolowski  *		is a part of.
888ab5bb018SKeith M Wesolowski  *
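 * As a sketch of the Zen-era meanings, again using BITX() and a hypothetical
 * 'cp' holding the register state from leaf 0x8000001E, these fields decode
 * roughly as follows (the Bulldozer-era meanings of %ebx differ as described
 * above):
 *
 *	uint_t core_id = BITX(cp->cp_ebx, 7, 0);
 *	uint_t threads_per_core = BITX(cp->cp_ebx, 15, 8) + 1;
 *	uint_t node_id = BITX(cp->cp_ecx, 7, 0);
 *	uint_t nodes_per_pkg = BITX(cp->cp_ecx, 10, 8) + 1;
 *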
 * Finally, we also use cpuid leaf 0x8000001D to gather information about the
 * cache layout and to determine which logical CPUs are sharing which caches.
891ab5bb018SKeith M Wesolowski  *
892ab5bb018SKeith M Wesolowski  * illumos Topology
893ab5bb018SKeith M Wesolowski  * ----------------
894ab5bb018SKeith M Wesolowski  *
895ab5bb018SKeith M Wesolowski  * Based on the above we synthesize the information into several different
896ab5bb018SKeith M Wesolowski  * variables that we store in the 'struct cpuid_info'. We'll go into the details
897ab5bb018SKeith M Wesolowski  * of what each member is supposed to represent and their uniqueness. In
898ab5bb018SKeith M Wesolowski  * general, there are two levels of uniqueness that we care about. We care about
899ab5bb018SKeith M Wesolowski  * an ID that is globally unique. That means that it will be unique across all
900ab5bb018SKeith M Wesolowski  * entities in the system. For example, the default logical CPU ID is globally
901ab5bb018SKeith M Wesolowski  * unique. On the other hand, there is some information that we only care about
902ab5bb018SKeith M Wesolowski  * being unique within the context of a single package / socket. Here are the
903ab5bb018SKeith M Wesolowski  * variables that we keep track of and their meaning.
904ab5bb018SKeith M Wesolowski  *
 * Several of the values below that represent an identifier, with the exception
 * of cpi_apicid, are allowed to be synthetic.
907ab5bb018SKeith M Wesolowski  *
908ab5bb018SKeith M Wesolowski  *
909ab5bb018SKeith M Wesolowski  * cpi_apicid
910ab5bb018SKeith M Wesolowski  *
 *	This is the value of the CPU's APIC ID. This should be the full 32-bit
 *	ID if the CPU is using the x2apic. Otherwise, it should be the 8-bit
 *	APIC ID. This value is globally unique between all logical CPUs across
 *	all packages. Such global uniqueness is usually required by the APIC.
915ab5bb018SKeith M Wesolowski  *
916ab5bb018SKeith M Wesolowski  * cpi_chipid
917ab5bb018SKeith M Wesolowski  *
 *	This value indicates the ID of the package that the logical CPU is a
 *	part of. This value is allowed to be synthetic. It is usually derived by
 *	taking the CPU's APIC ID and determining how many bits are used to
 *	represent CPU cores in the package (see the sketch at the end of this
 *	list). All logical CPUs that are part of the same package must have the
 *	same value.
923ab5bb018SKeith M Wesolowski  *
924ab5bb018SKeith M Wesolowski  * cpi_coreid
925ab5bb018SKeith M Wesolowski  *
926ab5bb018SKeith M Wesolowski  *	This represents the ID of a CPU core. Two logical CPUs should only have
927ab5bb018SKeith M Wesolowski  *	the same cpi_coreid value if they are part of the same core. These
928ab5bb018SKeith M Wesolowski  *	values may be synthetic. On systems that support SMT, this value is
929ab5bb018SKeith M Wesolowski  *	usually derived from the APIC ID, otherwise it is often synthetic and
930ab5bb018SKeith M Wesolowski  *	just set to the value of the cpu_id in the cpu_t.
931ab5bb018SKeith M Wesolowski  *
932ab5bb018SKeith M Wesolowski  * cpi_pkgcoreid
933ab5bb018SKeith M Wesolowski  *
934ab5bb018SKeith M Wesolowski  *	This is similar to the cpi_coreid in that logical CPUs that are part of
935ab5bb018SKeith M Wesolowski  *	the same core should have the same ID. The main difference is that these
936ab5bb018SKeith M Wesolowski  *	values are only required to be unique to a given socket.
937ab5bb018SKeith M Wesolowski  *
938ab5bb018SKeith M Wesolowski  * cpi_clogid
939ab5bb018SKeith M Wesolowski  *
 *	This represents the logical ID of a logical CPU. This value should be
 *	unique within a given socket for each logical CPU. This is allowed to be
 *	synthetic, though it is usually based off of the CPU's APIC ID. The
 *	broader system expects that logical CPUs that are part of the same
 *	core have contiguous numbers. For example, if there were two threads per
 *	core, then the IDs of the two logical CPUs in a core, divided by two,
 *	should be the same, while the first ID modulo two should be zero and the
 *	second should be one. Thus, IDs 4 and 5 indicate two logical CPUs that
 *	are part of the same core, while IDs 5 and 6 represent two logical CPUs
 *	that are part of different cores.
949ab5bb018SKeith M Wesolowski  *
950ab5bb018SKeith M Wesolowski  *	While it is common for the cpi_coreid and the cpi_clogid to be derived
951ab5bb018SKeith M Wesolowski  *	from the same source, strictly speaking, they don't have to be and the
952ab5bb018SKeith M Wesolowski  *	two values should be considered logically independent. One should not
953ab5bb018SKeith M Wesolowski  *	try to compare a logical CPU's cpi_coreid and cpi_clogid to determine
954ab5bb018SKeith M Wesolowski  *	some kind of relationship. While this is tempting, we've seen cases on
955ab5bb018SKeith M Wesolowski  *	AMD family 0xf where the system's cpu id is not related to its APIC ID.
956ab5bb018SKeith M Wesolowski  *
957ab5bb018SKeith M Wesolowski  * cpi_ncpu_per_chip
958ab5bb018SKeith M Wesolowski  *
959ab5bb018SKeith M Wesolowski  *	This value indicates the total number of logical CPUs that exist in the
960ab5bb018SKeith M Wesolowski  *	physical package. Critically, this is not the number of logical CPUs
961ab5bb018SKeith M Wesolowski  *	that exist for just the single core.
962ab5bb018SKeith M Wesolowski  *
963ab5bb018SKeith M Wesolowski  *	This value should be the same for all logical CPUs in the same package.
964ab5bb018SKeith M Wesolowski  *
965ab5bb018SKeith M Wesolowski  * cpi_ncore_per_chip
966ab5bb018SKeith M Wesolowski  *
 *	This value indicates the total number of physical CPU cores that exist
 *	in the package. The system compares this value with cpi_ncpu_per_chip to
 *	determine if simultaneous multi-threading (SMT) is enabled. When
 *	cpi_ncpu_per_chip equals cpi_ncore_per_chip, then there is no SMT and
 *	the X86FSET_HTT feature is not set. If this value is greater than one,
 *	then we consider the processor to have the feature X86FSET_CMP, to
 *	indicate that there is support for more than one core (see the sketch
 *	at the end of this list).
974ab5bb018SKeith M Wesolowski  *
975ab5bb018SKeith M Wesolowski  *	This value should be the same for all logical CPUs in the same package.
976ab5bb018SKeith M Wesolowski  *
977ab5bb018SKeith M Wesolowski  * cpi_procnodes_per_pkg
978ab5bb018SKeith M Wesolowski  *
979ab5bb018SKeith M Wesolowski  *	This value indicates the number of 'nodes' that exist in the package.
980ab5bb018SKeith M Wesolowski  *	When processors are actually a multi-chip module, this represents the
981ab5bb018SKeith M Wesolowski  *	number of such modules that exist in the package. Currently, on Intel
982ab5bb018SKeith M Wesolowski  *	based systems this member is always set to 1.
983ab5bb018SKeith M Wesolowski  *
984ab5bb018SKeith M Wesolowski  *	This value should be the same for all logical CPUs in the same package.
985ab5bb018SKeith M Wesolowski  *
986ab5bb018SKeith M Wesolowski  * cpi_procnodeid
987ab5bb018SKeith M Wesolowski  *
988ab5bb018SKeith M Wesolowski  *	This value indicates the ID of the node that the logical CPU is a part
989ab5bb018SKeith M Wesolowski  *	of. All logical CPUs that are in the same node must have the same value
990ab5bb018SKeith M Wesolowski  *	here. This value must be unique across all of the packages in the
991ab5bb018SKeith M Wesolowski  *	system.  On Intel based systems, this is currently set to the value in
992ab5bb018SKeith M Wesolowski  *	cpi_chipid because there is only one node.
993ab5bb018SKeith M Wesolowski  *
994ab5bb018SKeith M Wesolowski  * cpi_cores_per_compunit
995ab5bb018SKeith M Wesolowski  *
996ab5bb018SKeith M Wesolowski  *	This value indicates the number of cores that are part of a compute
997ab5bb018SKeith M Wesolowski  *	unit. See the AMD topology section for this. This member only has real
998ab5bb018SKeith M Wesolowski  *	meaning currently for AMD Bulldozer family processors. For all other
999ab5bb018SKeith M Wesolowski  *	processors, this should currently be set to 1.
1000ab5bb018SKeith M Wesolowski  *
1001ab5bb018SKeith M Wesolowski  * cpi_compunitid
1002ab5bb018SKeith M Wesolowski  *
1003ab5bb018SKeith M Wesolowski  *	This indicates the compute unit that the logical CPU belongs to. For
1004ab5bb018SKeith M Wesolowski  *	processors without AMD Bulldozer-style compute units this should be set
1005ab5bb018SKeith M Wesolowski  *	to the value of cpi_coreid.
1006ab5bb018SKeith M Wesolowski  *
1007ab5bb018SKeith M Wesolowski  * cpi_ncpu_shr_last_cache
1008ab5bb018SKeith M Wesolowski  *
1009ab5bb018SKeith M Wesolowski  *	This indicates the number of logical CPUs that are sharing the same last
1010ab5bb018SKeith M Wesolowski  *	level cache. This value should be the same for all CPUs that are sharing
1011ab5bb018SKeith M Wesolowski  *	that cache. The last cache refers to the cache that is closest to memory
1012ab5bb018SKeith M Wesolowski  *	and furthest away from the CPU.
1013ab5bb018SKeith M Wesolowski  *
1014ab5bb018SKeith M Wesolowski  * cpi_last_lvl_cacheid
1015ab5bb018SKeith M Wesolowski  *
1016ab5bb018SKeith M Wesolowski  *	This indicates the ID of the last cache that the logical CPU uses. This
1017ab5bb018SKeith M Wesolowski  *	cache is often shared between multiple logical CPUs and is the cache
1018ab5bb018SKeith M Wesolowski  *	that is closest to memory and furthest away from the CPU. This value
1019ab5bb018SKeith M Wesolowski  *	should be the same for a group of logical CPUs only if they actually
1020ab5bb018SKeith M Wesolowski  *	share the same last level cache. IDs should not overlap between
1021ab5bb018SKeith M Wesolowski  *	packages.
1022ab5bb018SKeith M Wesolowski  *
1023ab5bb018SKeith M Wesolowski  * cpi_ncore_bits
1024ab5bb018SKeith M Wesolowski  *
1025ab5bb018SKeith M Wesolowski  *	This indicates the number of bits that are required to represent all of
1026ab5bb018SKeith M Wesolowski  *	the cores in the system. As cores are derived based on their APIC IDs,
1027ab5bb018SKeith M Wesolowski  *	we aren't guaranteed a run of APIC IDs starting from zero. It's OK for
1028ab5bb018SKeith M Wesolowski  *	this value to be larger than the actual number of IDs that are present
1029ab5bb018SKeith M Wesolowski  *	in the system. This is used to size tables by the CMI framework. It is
1030ab5bb018SKeith M Wesolowski  *	only filled in for Intel and AMD CPUs.
1031ab5bb018SKeith M Wesolowski  *
1032ab5bb018SKeith M Wesolowski  * cpi_nthread_bits
1033ab5bb018SKeith M Wesolowski  *
1034ab5bb018SKeith M Wesolowski  *	This indicates the number of bits required to represent all of the IDs
1035ab5bb018SKeith M Wesolowski  *	that cover the logical CPUs that exist on a given core. It's OK for this
1036ab5bb018SKeith M Wesolowski  *	value to be larger than the actual number of IDs that are present in the
1037ab5bb018SKeith M Wesolowski  *	system.  This is used to size tables by the CMI framework. It is
1038ab5bb018SKeith M Wesolowski  *	only filled in for Intel and AMD CPUs.
1039ab5bb018SKeith M Wesolowski  *
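 * To make several of these relationships concrete, the following is a
 * simplified sketch of how such values can be derived from the APIC ID on a
 * Zen-style AMD system. This is illustrative only and is not the exact code
 * in this file, which must also handle hypervisors, older families, and other
 * wrinkles ('featureset' stands in for the feature set being constructed):
 *
 *	uint_t shift = cpi->cpi_ncore_bits + cpi->cpi_nthread_bits;
 *
 *	cpi->cpi_chipid = cpi->cpi_apicid >> shift;
 *	cpi->cpi_clogid = cpi->cpi_apicid & ((1 << shift) - 1);
 *	cpi->cpi_coreid = cpi->cpi_apicid >> cpi->cpi_nthread_bits;
 *	cpi->cpi_pkgcoreid = cpi->cpi_clogid >> cpi->cpi_nthread_bits;
 *
 *	if (cpi->cpi_ncpu_per_chip > cpi->cpi_ncore_per_chip)
 *		add_x86_feature(featureset, X86FSET_HTT);
 *	if (cpi->cpi_ncore_per_chip > 1)
 *		add_x86_feature(featureset, X86FSET_CMP);
 *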
1040ab5bb018SKeith M Wesolowski  * -----------
1041ab5bb018SKeith M Wesolowski  * Hypervisors
1042ab5bb018SKeith M Wesolowski  * -----------
1043ab5bb018SKeith M Wesolowski  *
1044ab5bb018SKeith M Wesolowski  * If trying to manage the differences between vendors wasn't bad enough, it can
1045ab5bb018SKeith M Wesolowski  * get worse thanks to our friend hardware virtualization. Hypervisors are given
1046ab5bb018SKeith M Wesolowski  * the ability to interpose on all cpuid instructions and change them to suit
 * their purposes. In general, this is necessary as the hypervisor wants to be
 * able to present a more uniform set of features, or to withhold knowledge of
 * some features from the guest operating system kernel, so that the guest can
 * be more easily migrated between systems.
1051ab5bb018SKeith M Wesolowski  *
1052ab5bb018SKeith M Wesolowski  * When it comes to trying to determine topology information, this can be a
1053ab5bb018SKeith M Wesolowski  * double edged sword. When a hypervisor doesn't actually implement a cpuid
 * leaf, it'll often return all zeros. Because of that, you'll often see various
 * checks scattered about, verifying that fields are non-zero before we assume
 * we can use them.
1057ab5bb018SKeith M Wesolowski  *
 * When it comes to topology information, the hypervisor is often incentivized
 * to lie to you about topology. This is because it doesn't necessarily
 * guarantee any particular topology at all. The topology path we take in the
 * system depends on how the CPU advertises itself. If it advertises itself as
 * an Intel or AMD CPU, then we basically do our normal path. However, when the
 * hypervisor doesn't advertise an actual vendor, that usually turns into
 * multiple one-core CPUs that we enumerate and that often appear to be on
 * different sockets. The actual behavior depends greatly on what the
 * hypervisor actually exposes to us.
1066ab5bb018SKeith M Wesolowski  *
1067ab5bb018SKeith M Wesolowski  * --------------------
1068ab5bb018SKeith M Wesolowski  * Exposing Information
1069ab5bb018SKeith M Wesolowski  * --------------------
1070ab5bb018SKeith M Wesolowski  *
1071ab5bb018SKeith M Wesolowski  * We expose CPUID information in three different forms in the system.
1072ab5bb018SKeith M Wesolowski  *
1073ab5bb018SKeith M Wesolowski  * The first is through the x86_featureset variable. This is used in conjunction
1074ab5bb018SKeith M Wesolowski  * with the is_x86_feature() function. This is queried by x86-specific functions
1075ab5bb018SKeith M Wesolowski  * to determine which features are or aren't present in the system and to make
1076ab5bb018SKeith M Wesolowski  * decisions based upon them. For example, users of this include everything from
1077ab5bb018SKeith M Wesolowski  * parts of the system dedicated to reliability, availability, and
1078ab5bb018SKeith M Wesolowski  * serviceability (RAS), to making decisions about how to handle security
1079ab5bb018SKeith M Wesolowski  * mitigations, to various x86-specific drivers. General purpose or
1080ab5bb018SKeith M Wesolowski  * architecture independent drivers should never be calling this function.
1081ab5bb018SKeith M Wesolowski  *
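 * For example, an x86-specific kernel consumer might check for a feature as
 * follows (the feature chosen here is purely illustrative):
 *
 *	if (is_x86_feature(x86_featureset, X86FSET_AVX)) {
 *		... rely on AVX and its state management ...
 *	}
 *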
1082ab5bb018SKeith M Wesolowski  * The second means is through the auxiliary vector. The auxiliary vector is a
1083ab5bb018SKeith M Wesolowski  * series of tagged data that the kernel passes down to a user program when it
1084ab5bb018SKeith M Wesolowski  * begins executing. This information is used to indicate to programs what
1085ab5bb018SKeith M Wesolowski  * instruction set extensions are present. For example, information about the
1086ab5bb018SKeith M Wesolowski  * CPU supporting the machine check architecture (MCA) wouldn't be passed down
1087ab5bb018SKeith M Wesolowski  * since user programs cannot make use of it. However, things like the AVX
1088ab5bb018SKeith M Wesolowski  * instruction sets are. Programs use this information to make run-time
1089ab5bb018SKeith M Wesolowski  * decisions about what features they should use. As an example, the run-time
1090ab5bb018SKeith M Wesolowski  * link-editor (rtld) can relocate different functions depending on the hardware
1091ab5bb018SKeith M Wesolowski  * support available.
1092ab5bb018SKeith M Wesolowski  *
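 * A minimal userland sketch of consuming this through getisax(3C), which
 * reports the AV_386_* flags built from this information (the specific flag
 * checked here is illustrative):
 *
 *	#include <sys/auxv.h>
 *
 *	uint32_t ui;
 *
 *	(void) getisax(&ui, 1);
 *	if (ui & AV_386_AES) {
 *		... use the AES-NI based implementation ...
 *	}
 *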
1093ab5bb018SKeith M Wesolowski  * The final form is through a series of accessor functions that all have the
1094ab5bb018SKeith M Wesolowski  * form cpuid_get*. This is used by a number of different subsystems in the
1095ab5bb018SKeith M Wesolowski  * kernel to determine more detailed information about what we're running on,
1096ab5bb018SKeith M Wesolowski  * topology information, etc. Some of these subsystems include processor groups
1097ab5bb018SKeith M Wesolowski  * (uts/common/os/pg.c.), CPU Module Interface (uts/i86pc/os/cmi.c), ACPI,
1098ab5bb018SKeith M Wesolowski  * microcode, and performance monitoring. These functions all ASSERT that the
1099ab5bb018SKeith M Wesolowski  * CPU they're being called on has reached a certain cpuid pass. If the passes
1100ab5bb018SKeith M Wesolowski  * are rearranged, then this needs to be adjusted.
1101ab5bb018SKeith M Wesolowski  *
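 * For example, a topology-aware subsystem might, once the relevant pass has
 * completed, do something like the following sketch, where CPU is the
 * standard pointer to the current cpu_t:
 *
 *	chipid_t chip = cpuid_get_chipid(CPU);
 *	uint_t core = cpuid_get_coreid(CPU);
 *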
1102ab5bb018SKeith M Wesolowski  * -----------------------------------------------
1103ab5bb018SKeith M Wesolowski  * Speculative Execution CPU Side Channel Security
1104ab5bb018SKeith M Wesolowski  * -----------------------------------------------
1105ab5bb018SKeith M Wesolowski  *
1106ab5bb018SKeith M Wesolowski  * With the advent of the Spectre and Meltdown attacks which exploit speculative
 * execution in the CPU to create side channels, there have been a number of
 * different attacks and corresponding issues that the operating system needs to
 * mitigate against. The following is a common, but not exhaustive, list of
 * issues that we know about, for which we have done some work in the system to
 * mitigate against, or for which more work is still needed:
1112ab5bb018SKeith M Wesolowski  *
1113ab5bb018SKeith M Wesolowski  *   - Spectre v1
1114ab5bb018SKeith M Wesolowski  *   - swapgs (Spectre v1 variant)
1115ab5bb018SKeith M Wesolowski  *   - Spectre v2
 *     - Branch History Injection (BHI)
1117ab5bb018SKeith M Wesolowski  *   - Meltdown (Spectre v3)
1118ab5bb018SKeith M Wesolowski  *   - Rogue Register Read (Spectre v3a)
1119ab5bb018SKeith M Wesolowski  *   - Speculative Store Bypass (Spectre v4)
1120ab5bb018SKeith M Wesolowski  *   - ret2spec, SpectreRSB
1121ab5bb018SKeith M Wesolowski  *   - L1 Terminal Fault (L1TF)
1122ab5bb018SKeith M Wesolowski  *   - Microarchitectural Data Sampling (MDS)
11235cd084edSDan McDonald  *   - Register File Data Sampling (RFDS)
1124ab5bb018SKeith M Wesolowski  *
1125ab5bb018SKeith M Wesolowski  * Each of these requires different sets of mitigations and has different attack
1126ab5bb018SKeith M Wesolowski  * surfaces. For the most part, this discussion is about protecting the kernel
1127ab5bb018SKeith M Wesolowski  * from non-kernel executing environments such as user processes and hardware
1128ab5bb018SKeith M Wesolowski  * virtual machines. Unfortunately, there are a number of user vs. user
1129ab5bb018SKeith M Wesolowski  * scenarios that exist with these. The rest of this section will describe the
1130ab5bb018SKeith M Wesolowski  * overall approach that the system has taken to address these as well as their
1131ab5bb018SKeith M Wesolowski  * shortcomings. Unfortunately, not all of the above have been handled today.
1132ab5bb018SKeith M Wesolowski  *
1133ab5bb018SKeith M Wesolowski  * SPECTRE v2, ret2spec, SpectreRSB
1134ab5bb018SKeith M Wesolowski  *
1135ab5bb018SKeith M Wesolowski  * The second variant of the spectre attack focuses on performing branch target
1136ab5bb018SKeith M Wesolowski  * injection. This generally impacts indirect call instructions in the system.
1137651a12cbSRobert Mustacchi  * There are four different ways to mitigate this issue that are commonly
1138ab5bb018SKeith M Wesolowski  * described today:
1139ab5bb018SKeith M Wesolowski  *
1140ab5bb018SKeith M Wesolowski  *  1. Using Indirect Branch Restricted Speculation (IBRS).
1141ab5bb018SKeith M Wesolowski  *  2. Using Retpolines and RSB Stuffing
1142ab5bb018SKeith M Wesolowski  *  3. Using Enhanced Indirect Branch Restricted Speculation (eIBRS)
 *  4. Using Automatic Indirect Branch Restricted Speculation (AIBRS)
1144ab5bb018SKeith M Wesolowski  *
1145ab5bb018SKeith M Wesolowski  * IBRS uses a feature added to microcode to restrict speculation, among other
1146ab5bb018SKeith M Wesolowski  * things. This form of mitigation has not been used as it has been generally
1147ab5bb018SKeith M Wesolowski  * seen as too expensive and requires reactivation upon various transitions in
1148ab5bb018SKeith M Wesolowski  * the system.
1149ab5bb018SKeith M Wesolowski  *
1150ab5bb018SKeith M Wesolowski  * As a less impactful alternative to IBRS, retpolines were developed by
1151ab5bb018SKeith M Wesolowski  * Google. These basically require one to replace indirect calls with a specific
1152ab5bb018SKeith M Wesolowski  * trampoline that will cause speculation to fail and break the attack.
1153ab5bb018SKeith M Wesolowski  * Retpolines require compiler support. We always build with retpolines in the
1154ab5bb018SKeith M Wesolowski  * external thunk mode. This means that a traditional indirect call is replaced
1155ab5bb018SKeith M Wesolowski  * with a call to one of the __x86_indirect_thunk_<reg> functions. A side effect
1156ab5bb018SKeith M Wesolowski  * of this is that all indirect function calls are performed through a register.
1157ab5bb018SKeith M Wesolowski  *
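 * For example, with retpolines enabled, the compiler turns an indirect call
 * such as:
 *
 *	call	*%rax
 *
 * into a direct call to the corresponding register-specific thunk:
 *
 *	call	__x86_indirect_thunk_rax
 *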
 * We have to use a common external location for the thunk and not inline it
 * into the callsite, so that we have a single place to patch these functions.
1160ab5bb018SKeith M Wesolowski  * As it turns out, we currently have two different forms of retpolines that
1161ab5bb018SKeith M Wesolowski  * exist in the system:
1162ab5bb018SKeith M Wesolowski  *
1163ab5bb018SKeith M Wesolowski  *  1. A full retpoline
1164ab5bb018SKeith M Wesolowski  *  2. A no-op version
1165ab5bb018SKeith M Wesolowski  *
 * The first one is used in the general case. Historically, there was an
 * AMD-specific optimized retpoline variant that was based around using a
 * serializing lfence instruction; however, in March 2022 it was announced that
 * this was actually still vulnerable to Spectre v2 and therefore we no longer
 * use it and it is no longer available in the system.
1171ab5bb018SKeith M Wesolowski  *
 * The third form of mitigation described above, eIBRS, is the most curious. It
 * turns out that the way that retpolines are implemented is that they rely on
 * how speculation is performed on a 'ret' instruction. Intel has continued to
 * optimize this process (which is partly why we need to have return stack
 * buffer stuffing, but more on that in a bit) and in processors starting with
 * Cascade Lake on the server side, it's dangerous to rely on retpolines.
 * Instead, a new mechanism has been introduced called Enhanced IBRS (eIBRS).
1179ab5bb018SKeith M Wesolowski  *
1180ab5bb018SKeith M Wesolowski  * Unlike IBRS, eIBRS is designed to be enabled once at boot and left on each
1181ab5bb018SKeith M Wesolowski  * physical core. However, if this is the case, we don't want to use retpolines
1182ab5bb018SKeith M Wesolowski  * any more. Therefore if eIBRS is present, we end up turning each retpoline
1183ab5bb018SKeith M Wesolowski  * function (called a thunk) into a jmp instruction. This means that we're still
1184ab5bb018SKeith M Wesolowski  * paying the cost of an extra jump to the external thunk, but it gives us
1185ab5bb018SKeith M Wesolowski  * flexibility and the ability to have a single kernel image that works across a
1186ab5bb018SKeith M Wesolowski  * wide variety of systems and hardware features.
1187ab5bb018SKeith M Wesolowski  *
 * Unfortunately, this alone is insufficient. First, Skylake systems have
 * additional speculation for the Return Stack Buffer (RSB), which is used to
 * predict returns from call instructions and which retpolines take advantage
 * of. However, this problem is not just limited to Skylake and is actually more
 * pernicious.
1192ab5bb018SKeith M Wesolowski  * The SpectreRSB paper introduces several more problems that can arise with
1193ab5bb018SKeith M Wesolowski  * dealing with this. The RSB can be poisoned just like the indirect branch
1194ab5bb018SKeith M Wesolowski  * predictor. This means that one needs to clear the RSB when transitioning
1195ab5bb018SKeith M Wesolowski  * between two different privilege domains. Some examples include:
1196ab5bb018SKeith M Wesolowski  *
1197ab5bb018SKeith M Wesolowski  *  - Switching between two different user processes
1198ab5bb018SKeith M Wesolowski  *  - Going between user land and the kernel
1199ab5bb018SKeith M Wesolowski  *  - Returning to the kernel from a hardware virtual machine
1200ab5bb018SKeith M Wesolowski  *
1201ab5bb018SKeith M Wesolowski  * Mitigating this involves combining a couple of different things. The first is
1202ab5bb018SKeith M Wesolowski  * SMEP (supervisor mode execution protection) which was introduced in Ivy
1203ab5bb018SKeith M Wesolowski  * Bridge. When an RSB entry refers to a user address and we're executing in the
1204ab5bb018SKeith M Wesolowski  * kernel, speculation through it will be stopped when SMEP is enabled. This
1205ab5bb018SKeith M Wesolowski  * protects against a number of the different cases that we would normally be
1206ab5bb018SKeith M Wesolowski  * worried about such as when we enter the kernel from user land.
1207ab5bb018SKeith M Wesolowski  *
1208ab5bb018SKeith M Wesolowski  * To prevent against additional manipulation of the RSB from other contexts
 * such as a non-root VMX context attacking the kernel, we first look to
1210ab5bb018SKeith M Wesolowski  * enhanced IBRS. When eIBRS is present and enabled, then there should be
1211ab5bb018SKeith M Wesolowski  * nothing else that we need to do to protect the kernel at this time.
1212ab5bb018SKeith M Wesolowski  *
 * Unfortunately, not all eIBRS implementations are sufficient to guard
 * against RSB manipulations, so we still need to manually overwrite the
 * contents of the return stack buffer unless the hardware specifies we are
 * covered. We do this through the x86_rsb_stuff() function. Currently this
 * is employed on context switch and vmx_exit. The x86_rsb_stuff() function is
 * disabled only when mitigations in general are disabled, or if we have
 * hardware indicating there is no need for post-barrier RSB protections,
 * either at one of those sites (older hardware) or at both (newer hardware).
1221ab5bb018SKeith M Wesolowski  *
1222ab5bb018SKeith M Wesolowski  * If SMEP is not present, then we would have to stuff the RSB every time we
1223ab5bb018SKeith M Wesolowski  * transitioned from user mode to the kernel, which isn't very practical right
1224ab5bb018SKeith M Wesolowski  * now.
1225ab5bb018SKeith M Wesolowski  *
1226ab5bb018SKeith M Wesolowski  * To fully protect user to user and vmx to vmx attacks from these classes of
1227ab5bb018SKeith M Wesolowski  * issues, we would also need to allow them to opt into performing an Indirect
1228ab5bb018SKeith M Wesolowski  * Branch Prediction Barrier (IBPB) on switch. This is not currently wired up.
1229ab5bb018SKeith M Wesolowski  *
 * The fourth form of mitigation here is specific to AMD and is called
 * Automatic IBRS (AIBRS). This is similar in spirit to eIBRS; however, rather
 * than setting the IBRS bit in MSR_IA32_SPEC_CTRL (0x48), we instead set a bit
 * in the EFER (extended feature enable register) MSR. This bit basically says
 * that IBRS acts as though it is always active when executing at CPL0 and when
 * executing in the 'host' context when SEV-SNP is enabled.
1236651a12cbSRobert Mustacchi  *
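 * Conceptually, enabling AIBRS is just a read-modify-write of the EFER MSR
 * (a sketch; the name of the enable bit below is illustrative):
 *
 *	uint64_t efer = rdmsr(MSR_AMD_EFER);
 *	wrmsr(MSR_AMD_EFER, efer | AMD_EFER_AIBRSE);
 *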
 * When this is active, AMD states that the RSB is cleared on VMEXIT and
 * therefore manual stuffing there is unnecessary. While this handles RSB
 * stuffing attacks from SVM to the kernel, we must still consider the remaining
 * cases that exist, just like above. While traditionally AMD employed a 32
 * entry RSB allowing the traditional technique to work, this is not true on all
 * CPUs. While a write to IBRS would clear the RSB if the processor supports
 * more than 32 entries (but not otherwise), AMD states that as long as at least
 * a single 4 KiB unmapped guard page is present between user and kernel address
 * spaces and SMEP is enabled, then there is no need to clear the RSB at all.
1246651a12cbSRobert Mustacchi  *
1247ab5bb018SKeith M Wesolowski  * By default, the system will enable RSB stuffing and the required variant of
1248ab5bb018SKeith M Wesolowski  * retpolines and store that information in the x86_spectrev2_mitigation value.
 * This will be evaluated after a microcode update as well, though it is
 * expected that microcode updates will not take away features. This may mean
 * that after a late microcode load the system may not end up in the optimal
 * configuration (though this should be rare).
1253ab5bb018SKeith M Wesolowski  *
1254ab5bb018SKeith M Wesolowski  * Currently we do not build kmdb with retpolines or perform any additional side
1255ab5bb018SKeith M Wesolowski  * channel security mitigations for it. One complication with kmdb is that it
1256ab5bb018SKeith M Wesolowski  * requires its own retpoline thunks and it would need to adjust itself based on
1257ab5bb018SKeith M Wesolowski  * what the kernel does. The threat model of kmdb is more limited and therefore
1258ab5bb018SKeith M Wesolowski  * it may make more sense to investigate using prediction barriers as the whole
1259ab5bb018SKeith M Wesolowski  * system is only executing a single instruction at a time while in kmdb.
1260ab5bb018SKeith M Wesolowski  *
12615a9c36deSDan McDonald  * Branch History Injection (BHI)
12625a9c36deSDan McDonald  *
12635a9c36deSDan McDonald  * BHI is a specific form of SPECTREv2 where an attacker may manipulate branch
12645a9c36deSDan McDonald  * history before transitioning from user to supervisor mode (or from VMX
12655a9c36deSDan McDonald  * non-root/guest to root mode). The attacker can then exploit certain
12665a9c36deSDan McDonald  * compiler-generated code-sequences ("gadgets") to disclose information from
12675a9c36deSDan McDonald  * other contexts or domains.  Recent (late-2023/early-2024) research in
12685a9c36deSDan McDonald  * object code analysis discovered many more potential gadgets than what was
12695a9c36deSDan McDonald  * initially reported (which previously was confined to Linux use of
12705a9c36deSDan McDonald  * unprivileged eBPF).
12715a9c36deSDan McDonald  *
 * The BHI threat doesn't exist in processors that predate eIBRS, or in AMD
 * ones. Some eIBRS processors have the ability to disable branch history in
 * certain (but not all) cases using an MSR write. eIBRS processors that don't
 * have that ability must use a software sequence to scrub the branch history
 * buffer.
12775a9c36deSDan McDonald  *
 * BHI_DIS_S (set via the aforementioned MSR write) protects ring 0 from ring
 * 3 (whether a VMX guest or VMX root). It does not protect different user
 * processes from each other, or a ring 3 VMX guest from ring 3 VMX root or
 * vice versa.
12815a9c36deSDan McDonald  *
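 * Where supported, engaging that protection is conceptually a single bit set
 * in MSR_IA32_SPEC_CTRL (a sketch; the name of the bit macro below is
 * illustrative):
 *
 *	uint64_t v = rdmsr(MSR_IA32_SPEC_CTRL);
 *	wrmsr(MSR_IA32_SPEC_CTRL, v | IA32_SPEC_CTRL_BHI_DIS_S);
 *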
 * The BHI clearing sequence prevents user code from exploiting kernel gadgets,
 * and prevents user A's use of user B's gadgets.
12845a9c36deSDan McDonald  *
12855a9c36deSDan McDonald  * SMEP and eIBRS are a continuing defense-in-depth measure protecting the
12865a9c36deSDan McDonald  * kernel.
12875a9c36deSDan McDonald  *
1288ab5bb018SKeith M Wesolowski  * SPECTRE v1, v4
1289ab5bb018SKeith M Wesolowski  *
1290ab5bb018SKeith M Wesolowski  * The v1 and v4 variants of spectre are not currently mitigated in the
1291ab5bb018SKeith M Wesolowski  * system and require other classes of changes to occur in the code.
1292ab5bb018SKeith M Wesolowski  *
1293ab5bb018SKeith M Wesolowski  * SPECTRE v1 (SWAPGS VARIANT)
1294ab5bb018SKeith M Wesolowski  *
 * The class of Spectre v1 vulnerabilities isn't all about bounds checks, but
 * can generally affect any branch-dependent code. The swapgs issue is one
 * variant of this. If we are coming in from userspace, we can have code like
 * this:
1299ab5bb018SKeith M Wesolowski  *
1300ab5bb018SKeith M Wesolowski  *	cmpw	$KCS_SEL, REGOFF_CS(%rsp)
1301ab5bb018SKeith M Wesolowski  *	je	1f
1302ab5bb018SKeith M Wesolowski  *	movq	$0, REGOFF_SAVFP(%rsp)
1303ab5bb018SKeith M Wesolowski  *	swapgs
1304ab5bb018SKeith M Wesolowski  *	1:
1305ab5bb018SKeith M Wesolowski  *	movq	%gs:CPU_THREAD, %rax
1306ab5bb018SKeith M Wesolowski  *
1307ab5bb018SKeith M Wesolowski  * If an attacker can cause a mis-speculation of the branch here, we could skip
1308ab5bb018SKeith M Wesolowski  * the needed swapgs, and use the /user/ %gsbase as the base of the %gs-based
1309ab5bb018SKeith M Wesolowski  * load. If subsequent code can act as the usual Spectre cache gadget, this
1310ab5bb018SKeith M Wesolowski  * would potentially allow KPTI bypass. To fix this, we need an lfence prior to
1311ab5bb018SKeith M Wesolowski  * any use of the %gs override.
1312ab5bb018SKeith M Wesolowski  *
1313ab5bb018SKeith M Wesolowski  * The other case is also an issue: if we're coming into a trap from kernel
1314ab5bb018SKeith M Wesolowski  * space, we could mis-speculate and swapgs the user %gsbase back in prior to
1315ab5bb018SKeith M Wesolowski  * using it. AMD systems are not vulnerable to this version, as a swapgs is
1316ab5bb018SKeith M Wesolowski  * serializing with respect to subsequent uses. But as AMD /does/ need the other
1317ab5bb018SKeith M Wesolowski  * case, and the fix is the same in both cases (an lfence at the branch target
1318ab5bb018SKeith M Wesolowski  * 1: in this example), we'll just do it unconditionally.
1319ab5bb018SKeith M Wesolowski  *
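 * To make that concrete, the mitigated version of the example above simply
 * adds an lfence at the branch target (a sketch; see the actual trap handling
 * code for the real sequence):
 *
 *	cmpw	$KCS_SEL, REGOFF_CS(%rsp)
 *	je	1f
 *	movq	$0, REGOFF_SAVFP(%rsp)
 *	swapgs
 *	1:
 *	lfence
 *	movq	%gs:CPU_THREAD, %rax
 *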
 * Note that we don't enable user-space "wrgsbase" via CR4_FSGSBASE, making it
 * harder for user-space to actually set a useful %gsbase value: although it's
 * not clear, it might still be feasible via lwp_setprivate(), so we mitigate
 * anyway.
1324ab5bb018SKeith M Wesolowski  *
1325ab5bb018SKeith M Wesolowski  * MELTDOWN
1326ab5bb018SKeith M Wesolowski  *
 * Meltdown, or spectre v3, allowed a user process to read any data mapped in
 * its address space regardless of whether or not the page tables in question
 * actually allowed the user to read it. The solution to meltdown
1330ab5bb018SKeith M Wesolowski  * is kernel page table isolation. In this world, there are two page tables that
1331ab5bb018SKeith M Wesolowski  * are used for a process, one in user land and one in the kernel. To implement
1332ab5bb018SKeith M Wesolowski  * this we use per-CPU page tables and switch between the user and kernel
1333ab5bb018SKeith M Wesolowski  * variants when entering and exiting the kernel.  For more information about
1334ab5bb018SKeith M Wesolowski  * this process and how the trampolines work, please see the big theory
1335ab5bb018SKeith M Wesolowski  * statements and additional comments in:
1336ab5bb018SKeith M Wesolowski  *
1337ab5bb018SKeith M Wesolowski  *  - uts/i86pc/ml/kpti_trampolines.s
1338ab5bb018SKeith M Wesolowski  *  - uts/i86pc/vm/hat_i86.c
1339ab5bb018SKeith M Wesolowski  *
 * While Meltdown only impacted Intel systems, and there are also Intel systems
 * that have Meltdown (formally called Rogue Data Cache Load) fixed, we always
 * have
1342ab5bb018SKeith M Wesolowski  * kernel page table isolation enabled. While this may at first seem weird, an
1343ab5bb018SKeith M Wesolowski  * important thing to remember is that you can't speculatively read an address
1344ab5bb018SKeith M Wesolowski  * if it's never in your page table at all. Having user processes without kernel
1345ab5bb018SKeith M Wesolowski  * pages present provides us with an important layer of defense in the kernel
1346ab5bb018SKeith M Wesolowski  * against any other side channel attacks that exist and have yet to be
1347ab5bb018SKeith M Wesolowski  * discovered. As such, kernel page table isolation (KPTI) is always enabled by
1348ab5bb018SKeith M Wesolowski  * default, no matter the x86 system.
1349ab5bb018SKeith M Wesolowski  *
1350ab5bb018SKeith M Wesolowski  * L1 TERMINAL FAULT
1351ab5bb018SKeith M Wesolowski  *
1352ab5bb018SKeith M Wesolowski  * L1 Terminal Fault (L1TF) takes advantage of an issue in how speculative
1353ab5bb018SKeith M Wesolowski  * execution uses page table entries. Effectively, it is two different
1354ab5bb018SKeith M Wesolowski  * problems. The first is that the processor ignores the not-present bit in
1355ab5bb018SKeith M Wesolowski  * page table entries when performing speculative execution. This means that
1356ab5bb018SKeith M Wesolowski  * something can speculatively read the listed physical address if it's
1357ab5bb018SKeith M Wesolowski  * present in the L1 data cache under certain conditions (see Intel's
1358ab5bb018SKeith M Wesolowski  * documentation for the full set of conditions). The second is that this can
1359ab5bb018SKeith M Wesolowski  * be used to bypass hardware virtualization extended page tables (EPT) that
1360ab5bb018SKeith M Wesolowski  * are part of Intel's hardware virtual machine instructions.
1361ab5bb018SKeith M Wesolowski  *
1362ab5bb018SKeith M Wesolowski  * For the non-hardware virtualized case, this is relatively easy to deal with.
1363ab5bb018SKeith M Wesolowski  * We must make sure that all unmapped pages have an address of zero. An
1364ab5bb018SKeith M Wesolowski  * attacker could thus speculatively read the first 4k of physical memory;
1365ab5bb018SKeith M Wesolowski  * however, we never use that first page in the operating system and always
1366ab5bb018SKeith M Wesolowski  * skip putting it in our memory map, even if firmware says we may use it.
1367ab5bb018SKeith M Wesolowski  * While other operating systems put extra metadata in the address and reserved
1368ab5bb018SKeith M Wesolowski  * bits of such entries, which made this problematic for them, we do not.
1369ab5bb018SKeith M Wesolowski  *
1370ab5bb018SKeith M Wesolowski  * For hardware virtual machines things are more complicated. Because they can
1371ab5bb018SKeith M Wesolowski  * construct their own page tables, it isn't hard for them to perform this
1372ab5bb018SKeith M Wesolowski  * attack against any physical address. The one wrinkle is that this physical
1373ab5bb018SKeith M Wesolowski  * address must be in the L1 data cache. Thus Intel added an MSR that we can use
1374ab5bb018SKeith M Wesolowski  * to flush the L1 data cache. We wrap this up in the function
1375ab5bb018SKeith M Wesolowski  * spec_uarch_flush(). This function is also used in the mitigation of
1376ab5bb018SKeith M Wesolowski  * microarchitectural data sampling (MDS) discussed later on. Kernel based
1377ab5bb018SKeith M Wesolowski  * hypervisors such as KVM or bhyve are responsible for performing this before
1378ab5bb018SKeith M Wesolowski  * entering the guest.
1379ab5bb018SKeith M Wesolowski  *
1380ab5bb018SKeith M Wesolowski  * Because this attack takes place in the L1 cache, there's another wrinkle
1381ab5bb018SKeith M Wesolowski  * here. The L1 cache is shared between all logical CPUs in a core in most Intel
1382ab5bb018SKeith M Wesolowski  * designs. This means that when a thread enters a hardware virtualized context
1383ab5bb018SKeith M Wesolowski  * and flushes the L1 data cache, the other thread on the processor may then go
1384ab5bb018SKeith M Wesolowski  * ahead and put new data in it that can be potentially attacked. While one
1385ab5bb018SKeith M Wesolowski  * solution is to disable SMT on the system, another option that is available is
1386ab5bb018SKeith M Wesolowski  * to use a feature for hardware virtualization called 'SMT exclusion'. This
1387ab5bb018SKeith M Wesolowski  * goes through and makes sure that if an HVM is being scheduled on one
1388ab5bb018SKeith M Wesolowski  * thread, then whatever runs on the sibling thread belongs to the same
1389ab5bb018SKeith M Wesolowski  * hardware virtual machine. If an interrupt comes in or the guest exits to
1390ab5bb018SKeith M Wesolowski  * the broader system, then the other SMT thread will be kicked out.
1391ab5bb018SKeith M Wesolowski  *
1392ab5bb018SKeith M Wesolowski  * L1TF can be fully mitigated by hardware. If the RDCL_NO feature is set in the
1393ab5bb018SKeith M Wesolowski  * architecture capabilities MSR (MSR_IA32_ARCH_CAPABILITIES), then we will not
1394ab5bb018SKeith M Wesolowski  * perform L1TF related mitigations.
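 *
 * As a minimal sketch, the resulting decision looks roughly like the
 * following (the control flow is illustrative only; the feature bits are
 * the real X86FSET_* values and the MSR names those in x86_archext.h):
 *
 *	if (is_x86_feature(fset, X86FSET_RDCL_NO)) {
 *		/* hardware is not vulnerable; no L1TF work to do */
 *	} else if (is_x86_feature(fset, X86FSET_FLUSH_CMD)) {
 *		/* flush the L1D through the microcode-provided MSR */
 *		wrmsr(MSR_IA32_FLUSH_CMD, IA32_FLUSH_CMD_L1D);
 *	}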
1395ab5bb018SKeith M Wesolowski  *
1396ab5bb018SKeith M Wesolowski  * MICROARCHITECTURAL DATA SAMPLING
1397ab5bb018SKeith M Wesolowski  *
1398ab5bb018SKeith M Wesolowski  * Microarchitectural data sampling (MDS) is a combination of four discrete
1399ab5bb018SKeith M Wesolowski  * but similar vulnerabilities affecting various parts of the CPU's
1400ab5bb018SKeith M Wesolowski  * microarchitectural implementation around load, store, and fill buffers.
1401ab5bb018SKeith M Wesolowski  * Specifically it is made up of the following subcomponents:
1402ab5bb018SKeith M Wesolowski  *
1403ab5bb018SKeith M Wesolowski  *  1. Microarchitectural Store Buffer Data Sampling (MSBDS)
1404ab5bb018SKeith M Wesolowski  *  2. Microarchitectural Fill Buffer Data Sampling (MFBDS)
1405ab5bb018SKeith M Wesolowski  *  3. Microarchitectural Load Port Data Sampling (MLPDS)
1406ab5bb018SKeith M Wesolowski  *  4. Microarchitectural Data Sampling Uncacheable Memory (MDSUM)
1407ab5bb018SKeith M Wesolowski  *
1408ab5bb018SKeith M Wesolowski  * To begin addressing these, Intel has introduced another feature in microcode
1409ab5bb018SKeith M Wesolowski  * called MD_CLEAR. This overloads the verw instruction so that executing it
1410ab5bb018SKeith M Wesolowski  * in a particular way flushes the state of the affected buffers. The L1TF
1411ab5bb018SKeith M Wesolowski  * L1D flush mechanism is also updated, when this microcode is present, to
1412ab5bb018SKeith M Wesolowski  * flush this state.
1413ab5bb018SKeith M Wesolowski  *
1414ab5bb018SKeith M Wesolowski  * Primarily we need to flush this state whenever we transition from the kernel
1415ab5bb018SKeith M Wesolowski  * to a less privileged context such as user mode or an HVM guest. MSBDS is a
1416ab5bb018SKeith M Wesolowski  * little bit different. Here the structures are statically partitioned
1417ab5bb018SKeith M Wesolowski  * between the active logical CPUs and repartitioned when one goes to sleep.
1418ab5bb018SKeith M Wesolowski  * Therefore, we also need to flush the microarchitectural state before the
1419ab5bb018SKeith M Wesolowski  * CPU goes idle via hlt, mwait, or another ACPI method. To perform these
1420ab5bb018SKeith M Wesolowski  * flushes, we call x86_md_clear() at all of these transition points.
1421ab5bb018SKeith M Wesolowski  *
1422ab5bb018SKeith M Wesolowski  * If hardware enumerates RDCL_NO, indicating that it is not vulnerable to L1TF,
1423ab5bb018SKeith M Wesolowski  * then we change the spec_uarch_flush() function to point to x86_md_clear(). If
1424ab5bb018SKeith M Wesolowski  * MDS_NO has been set, then this is fully mitigated and x86_md_clear() becomes
1425ab5bb018SKeith M Wesolowski  * a no-op.
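 *
 * A simplified sketch of that selection (illustrative control flow; the
 * feature bits are the real X86FSET_* values):
 *
 *	if (is_x86_feature(fset, X86FSET_MDS_NO)) {
 *		/* fully mitigated: x86_md_clear() stays a no-op */
 *	} else if (is_x86_feature(fset, X86FSET_MD_CLEAR)) {
 *		/* x86_md_clear() issues the VERW-based buffer flush */
 *	}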
1426ab5bb018SKeith M Wesolowski  *
1427ab5bb018SKeith M Wesolowski  * Unfortunately, with this issue hyperthreading rears its ugly head. In
1428ab5bb018SKeith M Wesolowski  * particular, everything we've discussed above is only valid for a single
1429ab5bb018SKeith M Wesolowski  * thread executing on a core. In the case where you have hyper-threading
1430ab5bb018SKeith M Wesolowski  * present, this attack can be performed between threads. The theoretical fix
1431ab5bb018SKeith M Wesolowski  * for this is to ensure that both threads are always in the same security
1432ab5bb018SKeith M Wesolowski  * domain. This means that they are executing in the same ring and mutually
1433ab5bb018SKeith M Wesolowski  * trust each other. Practically speaking, this would mean that a system call
1434ab5bb018SKeith M Wesolowski  * would have to issue an inter-processor interrupt (IPI) to the other thread.
1435ab5bb018SKeith M Wesolowski  * Rather than implement this, we recommend disabling hyper-threading through
1436ab5bb018SKeith M Wesolowski  * the use of psradm -aS.
1437ab5bb018SKeith M Wesolowski  *
1438ab5bb018SKeith M Wesolowski  * TSX ASYNCHRONOUS ABORT
1439ab5bb018SKeith M Wesolowski  *
1440ab5bb018SKeith M Wesolowski  * TSX Asynchronous Abort (TAA) is another side-channel vulnerability that
1441ab5bb018SKeith M Wesolowski  * behaves like MDS, but leverages Intel's transactional instructions as another
1442ab5bb018SKeith M Wesolowski  * vector. Effectively, when a transaction hits one of these cases (unmapped
1443ab5bb018SKeith M Wesolowski  * page, various cache snoop activity, etc.) then the same data can be exposed
1444ab5bb018SKeith M Wesolowski  * as in the case of MDS. This means that you can attack your twin.
1445ab5bb018SKeith M Wesolowski  *
1446ab5bb018SKeith M Wesolowski  * Intel has described that there are two different ways that we can mitigate
1447ab5bb018SKeith M Wesolowski  * this problem on affected processors:
1448ab5bb018SKeith M Wesolowski  *
1449ab5bb018SKeith M Wesolowski  *   1) We can use the same techniques used to deal with MDS. Flushing the
1450ab5bb018SKeith M Wesolowski  *      microarchitectural buffers and disabling hyperthreading will mitigate
1451ab5bb018SKeith M Wesolowski  *      this in the same way.
1452ab5bb018SKeith M Wesolowski  *
1453ab5bb018SKeith M Wesolowski  *   2) Using microcode to disable TSX.
1454ab5bb018SKeith M Wesolowski  *
1455ab5bb018SKeith M Wesolowski  * Now, most processors that are subject to MDS (as in they don't have MDS_NO in
1456ab5bb018SKeith M Wesolowski  * the IA32_ARCH_CAPABILITIES MSR) will not receive microcode to disable TSX.
1457ab5bb018SKeith M Wesolowski  * That's OK as we're already doing all such mitigations. On the other hand,
1458ab5bb018SKeith M Wesolowski  * processors with MDS_NO are all supposed to receive microcode updates that
1459ab5bb018SKeith M Wesolowski  * enumerate support for disabling TSX. In general, we'd rather use this method
1460ab5bb018SKeith M Wesolowski  * when available as it doesn't require disabling hyperthreading to be
1461ab5bb018SKeith M Wesolowski  * effective. Currently we are essentially relying on microcode for processors
1462ab5bb018SKeith M Wesolowski  * that enumerate MDS_NO.
1463ab5bb018SKeith M Wesolowski  *
14645cd084edSDan McDonald  * Another MDS-variant in a few select Intel Atom CPUs is Register File Data
14655cd084edSDan McDonald  * Sampling: RFDS. This allows an attacker to sample values that were in any
14665cd084edSDan McDonald  * of the integer, floating point, or vector registers. This was discovered by
14675cd084edSDan McDonald  * Intel during internal validation work.  The existence of the RFDS_NO
14685cd084edSDan McDonald  * capability, or the LACK of a RFDS_CLEAR capability, means we do not have to
14695cd084edSDan McDonald  * act. Intel has said some CPU models immune to RFDS MAY NOT enumerate
14705cd084edSDan McDonald  * RFDS_NO. If RFDS_NO is not set, but RFDS_CLEAR is, we must enable
14715cd084edSDan McDonald  * x86_md_clear() and make sure it uses VERW. Unlike MDS, RFDS can't be
14725cd084edSDan McDonald  * mitigated by the L1D flush MSR.
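 *
 * In other words (again an illustrative flow, using the real feature bits):
 *
 *	if (!is_x86_feature(fset, X86FSET_RFDS_NO) &&
 *	    is_x86_feature(fset, X86FSET_RFDS_CLEAR)) {
 *		/* mitigate: x86_md_clear() must be live and VERW-based */
 *	}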
14735cd084edSDan McDonald  *
1474ab5bb018SKeith M Wesolowski  * The microcode features are enumerated as part of the IA32_ARCH_CAPABILITIES.
1475ab5bb018SKeith M Wesolowski  * When bit 7 (IA32_ARCH_CAP_TSX_CTRL) is present, then we are given two
1476ab5bb018SKeith M Wesolowski  * different powers. The first allows us to cause all transactions to
1477ab5bb018SKeith M Wesolowski  * immediately abort. The second gives us a means of disabling TSX completely,
1478ab5bb018SKeith M Wesolowski  * which includes removing it from cpuid. If we have support for this in
1479ab5bb018SKeith M Wesolowski  * microcode during the first cpuid pass, then we'll disable TSX completely such
1480ab5bb018SKeith M Wesolowski  * that user land never has a chance to observe the bit. However, if we are late
1481ab5bb018SKeith M Wesolowski  * loading the microcode, then we must use the functionality to cause
1482ab5bb018SKeith M Wesolowski  * transactions to automatically abort. This is necessary for user land's sake.
1483ab5bb018SKeith M Wesolowski  * Once a program sees a cpuid bit, it must not be taken away.
1484ab5bb018SKeith M Wesolowski  *
1485ab5bb018SKeith M Wesolowski  * We track whether or not we should do this based on what cpuid pass we're in.
1486ab5bb018SKeith M Wesolowski  * Whenever we hit cpuid_scan_security() on the boot CPU and we're still on pass
1487ab5bb018SKeith M Wesolowski  * 1 of the cpuid logic, then we can completely turn off TSX. Notably this
1488ab5bb018SKeith M Wesolowski  * should happen twice. Once in the normal cpuid_pass_basic() code and then a
1489ab5bb018SKeith M Wesolowski  * second time after we do the initial microcode update.  As a result we need to
1490ab5bb018SKeith M Wesolowski  * be careful in cpuid_apply_tsx() to only use the MSR if we've loaded a
1491ab5bb018SKeith M Wesolowski  * suitable microcode on the current CPU (which happens prior to
1492ab5bb018SKeith M Wesolowski  * cpuid_pass_ucode()).
1493ab5bb018SKeith M Wesolowski  *
1494ab5bb018SKeith M Wesolowski  * If TAA has been fixed, then it will be enumerated in IA32_ARCH_CAPABILITIES
1495ab5bb018SKeith M Wesolowski  * as TAA_NO. In such a case, we will still disable TSX: it's proven to be an
1496ab5bb018SKeith M Wesolowski  * unfortunate feature in a number of ways, and taking the opportunity to
1497ab5bb018SKeith M Wesolowski  * finally be able to turn it off is likely to be of benefit in the future.
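 *
 * A short sketch of what exercising those two powers looks like (the MSR
 * and bit names are assumed to match the x86_archext.h definitions):
 *
 *	uint64_t val = rdmsr(MSR_IA32_TSX_CTRL);
 *	val |= IA32_TSX_CTRL_RTM_DISABLE;	/* force transactions to abort */
 *	val |= IA32_TSX_CTRL_CPUID_CLEAR;	/* hide TSX from cpuid */
 *	wrmsr(MSR_IA32_TSX_CTRL, val);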
1498ab5bb018SKeith M Wesolowski  *
1499ab5bb018SKeith M Wesolowski  * SUMMARY
1500ab5bb018SKeith M Wesolowski  *
1501ab5bb018SKeith M Wesolowski  * The following table attempts to summarize the mitigations for various issues
1502ab5bb018SKeith M Wesolowski  * and what's done in various places:
1503ab5bb018SKeith M Wesolowski  *
1504ab5bb018SKeith M Wesolowski  *  - Spectre v1: Not currently mitigated
1505ab5bb018SKeith M Wesolowski  *  - swapgs: lfences after swapgs paths
1506651a12cbSRobert Mustacchi  *  - Spectre v2: Retpolines/RSB Stuffing or eIBRS/AIBRS if HW support
1507ab5bb018SKeith M Wesolowski  *  - Meltdown: Kernel Page Table Isolation
1508ab5bb018SKeith M Wesolowski  *  - Spectre v3a: Updated CPU microcode
1509ab5bb018SKeith M Wesolowski  *  - Spectre v4: Not currently mitigated
1510ab5bb018SKeith M Wesolowski  *  - SpectreRSB: SMEP and RSB Stuffing
1511ab5bb018SKeith M Wesolowski  *  - L1TF: spec_uarch_flush, SMT exclusion, requires microcode
1512ab5bb018SKeith M Wesolowski  *  - MDS: x86_md_clear, requires microcode, disabling SMT
1513ab5bb018SKeith M Wesolowski  *  - TAA: x86_md_clear and disabling SMT OR microcode and disabling TSX
15145cd084edSDan McDonald  *  - RFDS: x86_md_clear with microcode if RFDS_CLEAR is set and RFDS_NO is not.
15155a9c36deSDan McDonald  *  - BHI: software sequence, and use of BHI_DIS_S if microcode has it.
1516ab5bb018SKeith M Wesolowski  *
1517ab5bb018SKeith M Wesolowski  * The following table indicates the x86 feature set bits that indicate that a
1518ab5bb018SKeith M Wesolowski  * given problem has been solved or a notable feature is present:
1519ab5bb018SKeith M Wesolowski  *
1520ab5bb018SKeith M Wesolowski  *  - RDCL_NO: Meltdown, L1TF, MSBDS subset of MDS
1521ab5bb018SKeith M Wesolowski  *  - MDS_NO: All forms of MDS
1522ab5bb018SKeith M Wesolowski  *  - TAA_NO: TAA
15235cd084edSDan McDonald  *  - RFDS_NO: RFDS
15245a9c36deSDan McDonald  *  - BHI_NO: BHI
1525ab5bb018SKeith M Wesolowski  */
1526ab5bb018SKeith M Wesolowski 
1527ab5bb018SKeith M Wesolowski #include <sys/types.h>
1528ab5bb018SKeith M Wesolowski #include <sys/archsystm.h>
1529ab5bb018SKeith M Wesolowski #include <sys/x86_archext.h>
1530ab5bb018SKeith M Wesolowski #include <sys/kmem.h>
1531ab5bb018SKeith M Wesolowski #include <sys/systm.h>
1532ab5bb018SKeith M Wesolowski #include <sys/cmn_err.h>
1533ab5bb018SKeith M Wesolowski #include <sys/sunddi.h>
1534ab5bb018SKeith M Wesolowski #include <sys/sunndi.h>
1535ab5bb018SKeith M Wesolowski #include <sys/cpuvar.h>
1536ab5bb018SKeith M Wesolowski #include <sys/processor.h>
1537ab5bb018SKeith M Wesolowski #include <sys/sysmacros.h>
1538ab5bb018SKeith M Wesolowski #include <sys/pg.h>
1539ab5bb018SKeith M Wesolowski #include <sys/fp.h>
1540ab5bb018SKeith M Wesolowski #include <sys/controlregs.h>
1541ab5bb018SKeith M Wesolowski #include <sys/bitmap.h>
1542ab5bb018SKeith M Wesolowski #include <sys/auxv_386.h>
1543ab5bb018SKeith M Wesolowski #include <sys/memnode.h>
1544ab5bb018SKeith M Wesolowski #include <sys/pci_cfgspace.h>
1545ab5bb018SKeith M Wesolowski #include <sys/comm_page.h>
1546ab5bb018SKeith M Wesolowski #include <sys/mach_mmu.h>
1547ab5bb018SKeith M Wesolowski #include <sys/ucode.h>
1548ab5bb018SKeith M Wesolowski #include <sys/tsc.h>
1549ab5bb018SKeith M Wesolowski #include <sys/kobj.h>
1550ab5bb018SKeith M Wesolowski #include <sys/asm_misc.h>
1552ab5bb018SKeith M Wesolowski 
1553ab5bb018SKeith M Wesolowski #ifdef __xpv
1554ab5bb018SKeith M Wesolowski #include <sys/hypervisor.h>
1555ab5bb018SKeith M Wesolowski #else
1556ab5bb018SKeith M Wesolowski #include <sys/ontrap.h>
1557ab5bb018SKeith M Wesolowski #endif
1558ab5bb018SKeith M Wesolowski 
1559ab5bb018SKeith M Wesolowski uint_t x86_vendor = X86_VENDOR_IntelClone;
1560ab5bb018SKeith M Wesolowski uint_t x86_type = X86_TYPE_OTHER;
1561ab5bb018SKeith M Wesolowski uint_t x86_clflush_size = 0;
1562ab5bb018SKeith M Wesolowski 
1563ab5bb018SKeith M Wesolowski #if defined(__xpv)
1564ab5bb018SKeith M Wesolowski int x86_use_pcid = 0;
1565ab5bb018SKeith M Wesolowski int x86_use_invpcid = 0;
1566ab5bb018SKeith M Wesolowski #else
1567ab5bb018SKeith M Wesolowski int x86_use_pcid = -1;
1568ab5bb018SKeith M Wesolowski int x86_use_invpcid = -1;
1569ab5bb018SKeith M Wesolowski #endif
1570ab5bb018SKeith M Wesolowski 
1571ab5bb018SKeith M Wesolowski typedef enum {
1572ab5bb018SKeith M Wesolowski 	X86_SPECTREV2_RETPOLINE,
1573ab5bb018SKeith M Wesolowski 	X86_SPECTREV2_ENHANCED_IBRS,
1574651a12cbSRobert Mustacchi 	X86_SPECTREV2_AUTO_IBRS,
1575ab5bb018SKeith M Wesolowski 	X86_SPECTREV2_DISABLED
1576ab5bb018SKeith M Wesolowski } x86_spectrev2_mitigation_t;
1577ab5bb018SKeith M Wesolowski 
1578ab5bb018SKeith M Wesolowski uint_t x86_disable_spectrev2 = 0;
1579ab5bb018SKeith M Wesolowski static x86_spectrev2_mitigation_t x86_spectrev2_mitigation =
1580ab5bb018SKeith M Wesolowski     X86_SPECTREV2_RETPOLINE;
1581ab5bb018SKeith M Wesolowski 
1582ab5bb018SKeith M Wesolowski /*
1583ab5bb018SKeith M Wesolowski  * The mitigation status for TAA:
1584ab5bb018SKeith M Wesolowski  * X86_TAA_NOTHING -- no mitigation available for TAA side-channels
1585ab5bb018SKeith M Wesolowski  * X86_TAA_DISABLED -- mitigation disabled via x86_disable_taa
1586ab5bb018SKeith M Wesolowski  * X86_TAA_MD_CLEAR -- MDS mitigation also suffices for TAA
1587ab5bb018SKeith M Wesolowski  * X86_TAA_TSX_FORCE_ABORT -- transactions are forced to abort
1588ab5bb018SKeith M Wesolowski  * X86_TAA_TSX_DISABLE -- force abort transactions and hide from CPUID
1589ab5bb018SKeith M Wesolowski  * X86_TAA_HW_MITIGATED -- TSX potentially active but H/W not TAA-vulnerable
1590ab5bb018SKeith M Wesolowski  */
1591ab5bb018SKeith M Wesolowski typedef enum {
1592ab5bb018SKeith M Wesolowski 	X86_TAA_NOTHING,
1593ab5bb018SKeith M Wesolowski 	X86_TAA_DISABLED,
1594ab5bb018SKeith M Wesolowski 	X86_TAA_MD_CLEAR,
1595ab5bb018SKeith M Wesolowski 	X86_TAA_TSX_FORCE_ABORT,
1596ab5bb018SKeith M Wesolowski 	X86_TAA_TSX_DISABLE,
1597ab5bb018SKeith M Wesolowski 	X86_TAA_HW_MITIGATED
1598ab5bb018SKeith M Wesolowski } x86_taa_mitigation_t;
1599ab5bb018SKeith M Wesolowski 
1600ab5bb018SKeith M Wesolowski uint_t x86_disable_taa = 0;
1601ab5bb018SKeith M Wesolowski static x86_taa_mitigation_t x86_taa_mitigation = X86_TAA_NOTHING;
1602ab5bb018SKeith M Wesolowski 
1603ab5bb018SKeith M Wesolowski uint_t pentiumpro_bug4046376;
1604ab5bb018SKeith M Wesolowski 
1605ab5bb018SKeith M Wesolowski uchar_t x86_featureset[BT_SIZEOFMAP(NUM_X86_FEATURES)];
1606ab5bb018SKeith M Wesolowski 
1607ab5bb018SKeith M Wesolowski static char *x86_feature_names[NUM_X86_FEATURES] = {
1608ab5bb018SKeith M Wesolowski 	"lgpg",
1609ab5bb018SKeith M Wesolowski 	"tsc",
1610ab5bb018SKeith M Wesolowski 	"msr",
1611ab5bb018SKeith M Wesolowski 	"mtrr",
1612ab5bb018SKeith M Wesolowski 	"pge",
1613ab5bb018SKeith M Wesolowski 	"de",
1614ab5bb018SKeith M Wesolowski 	"cmov",
1615ab5bb018SKeith M Wesolowski 	"mmx",
1616ab5bb018SKeith M Wesolowski 	"mca",
1617ab5bb018SKeith M Wesolowski 	"pae",
1618ab5bb018SKeith M Wesolowski 	"cv8",
1619ab5bb018SKeith M Wesolowski 	"pat",
1620ab5bb018SKeith M Wesolowski 	"sep",
1621ab5bb018SKeith M Wesolowski 	"sse",
1622ab5bb018SKeith M Wesolowski 	"sse2",
1623ab5bb018SKeith M Wesolowski 	"htt",
1624ab5bb018SKeith M Wesolowski 	"asysc",
1625ab5bb018SKeith M Wesolowski 	"nx",
1626ab5bb018SKeith M Wesolowski 	"sse3",
1627ab5bb018SKeith M Wesolowski 	"cx16",
1628ab5bb018SKeith M Wesolowski 	"cmp",
1629ab5bb018SKeith M Wesolowski 	"tscp",
1630ab5bb018SKeith M Wesolowski 	"mwait",
1631ab5bb018SKeith M Wesolowski 	"sse4a",
1632ab5bb018SKeith M Wesolowski 	"cpuid",
1633ab5bb018SKeith M Wesolowski 	"ssse3",
1634ab5bb018SKeith M Wesolowski 	"sse4_1",
1635ab5bb018SKeith M Wesolowski 	"sse4_2",
1636ab5bb018SKeith M Wesolowski 	"1gpg",
1637ab5bb018SKeith M Wesolowski 	"clfsh",
1638ab5bb018SKeith M Wesolowski 	"64",
1639ab5bb018SKeith M Wesolowski 	"aes",
1640ab5bb018SKeith M Wesolowski 	"pclmulqdq",
1641ab5bb018SKeith M Wesolowski 	"xsave",
1642ab5bb018SKeith M Wesolowski 	"avx",
1643ab5bb018SKeith M Wesolowski 	"vmx",
1644ab5bb018SKeith M Wesolowski 	"svm",
1645ab5bb018SKeith M Wesolowski 	"topoext",
1646ab5bb018SKeith M Wesolowski 	"f16c",
1647ab5bb018SKeith M Wesolowski 	"rdrand",
1648ab5bb018SKeith M Wesolowski 	"x2apic",
1649ab5bb018SKeith M Wesolowski 	"avx2",
1650ab5bb018SKeith M Wesolowski 	"bmi1",
1651ab5bb018SKeith M Wesolowski 	"bmi2",
1652ab5bb018SKeith M Wesolowski 	"fma",
1653ab5bb018SKeith M Wesolowski 	"smep",
1654ab5bb018SKeith M Wesolowski 	"smap",
1655ab5bb018SKeith M Wesolowski 	"adx",
1656ab5bb018SKeith M Wesolowski 	"rdseed",
1657ab5bb018SKeith M Wesolowski 	"mpx",
1658ab5bb018SKeith M Wesolowski 	"avx512f",
1659ab5bb018SKeith M Wesolowski 	"avx512dq",
1660ab5bb018SKeith M Wesolowski 	"avx512pf",
1661ab5bb018SKeith M Wesolowski 	"avx512er",
1662ab5bb018SKeith M Wesolowski 	"avx512cd",
1663ab5bb018SKeith M Wesolowski 	"avx512bw",
1664ab5bb018SKeith M Wesolowski 	"avx512vl",
1665ab5bb018SKeith M Wesolowski 	"avx512fma",
1666ab5bb018SKeith M Wesolowski 	"avx512vbmi",
1667ab5bb018SKeith M Wesolowski 	"avx512_vpopcntdq",
1668ab5bb018SKeith M Wesolowski 	"avx512_4vnniw",
1669ab5bb018SKeith M Wesolowski 	"avx512_4fmaps",
1670ab5bb018SKeith M Wesolowski 	"xsaveopt",
1671ab5bb018SKeith M Wesolowski 	"xsavec",
1672ab5bb018SKeith M Wesolowski 	"xsaves",
1673ab5bb018SKeith M Wesolowski 	"sha",
1674ab5bb018SKeith M Wesolowski 	"umip",
1675ab5bb018SKeith M Wesolowski 	"pku",
1676ab5bb018SKeith M Wesolowski 	"ospke",
1677ab5bb018SKeith M Wesolowski 	"pcid",
1678ab5bb018SKeith M Wesolowski 	"invpcid",
1679ab5bb018SKeith M Wesolowski 	"ibrs",
1680ab5bb018SKeith M Wesolowski 	"ibpb",
1681ab5bb018SKeith M Wesolowski 	"stibp",
1682ab5bb018SKeith M Wesolowski 	"ssbd",
1683ab5bb018SKeith M Wesolowski 	"ssbd_virt",
1684ab5bb018SKeith M Wesolowski 	"rdcl_no",
1685ab5bb018SKeith M Wesolowski 	"ibrs_all",
1686ab5bb018SKeith M Wesolowski 	"rsba",
1687ab5bb018SKeith M Wesolowski 	"ssb_no",
1688ab5bb018SKeith M Wesolowski 	"stibp_all",
1689ab5bb018SKeith M Wesolowski 	"flush_cmd",
1690ab5bb018SKeith M Wesolowski 	"l1d_vmentry_no",
1691ab5bb018SKeith M Wesolowski 	"fsgsbase",
1692ab5bb018SKeith M Wesolowski 	"clflushopt",
1693ab5bb018SKeith M Wesolowski 	"clwb",
1694ab5bb018SKeith M Wesolowski 	"monitorx",
1695ab5bb018SKeith M Wesolowski 	"clzero",
1696ab5bb018SKeith M Wesolowski 	"xop",
1697ab5bb018SKeith M Wesolowski 	"fma4",
1698ab5bb018SKeith M Wesolowski 	"tbm",
1699ab5bb018SKeith M Wesolowski 	"avx512_vnni",
1700ab5bb018SKeith M Wesolowski 	"amd_pcec",
1701ab5bb018SKeith M Wesolowski 	"md_clear",
1702ab5bb018SKeith M Wesolowski 	"mds_no",
1703ab5bb018SKeith M Wesolowski 	"core_thermal",
1704ab5bb018SKeith M Wesolowski 	"pkg_thermal",
1705ab5bb018SKeith M Wesolowski 	"tsx_ctrl",
1706ab5bb018SKeith M Wesolowski 	"taa_no",
1707ab5bb018SKeith M Wesolowski 	"ppin",
1708ab5bb018SKeith M Wesolowski 	"vaes",
1709ab5bb018SKeith M Wesolowski 	"vpclmulqdq",
171056726c7eSRobert Mustacchi 	"lfence_serializing",
171156726c7eSRobert Mustacchi 	"gfni",
171256726c7eSRobert Mustacchi 	"avx512_vp2intersect",
171356726c7eSRobert Mustacchi 	"avx512_bitalg",
171456726c7eSRobert Mustacchi 	"avx512_vbmi2",
1715651a12cbSRobert Mustacchi 	"avx512_bf16",
17165cd084edSDan McDonald 	"auto_ibrs",
17175cd084edSDan McDonald 	"rfds_no",
1718a6e309baSDan McDonald 	"rfds_clear",
17195a9c36deSDan McDonald 	"pbrsb_no",
17205a9c36deSDan McDonald 	"bhi_no",
17215a9c36deSDan McDonald 	"bhi_clear"
1722ab5bb018SKeith M Wesolowski };
1723ab5bb018SKeith M Wesolowski 
1724ab5bb018SKeith M Wesolowski boolean_t
1725ab5bb018SKeith M Wesolowski is_x86_feature(void *featureset, uint_t feature)
1726ab5bb018SKeith M Wesolowski {
1727ab5bb018SKeith M Wesolowski 	ASSERT(feature < NUM_X86_FEATURES);
1728ab5bb018SKeith M Wesolowski 	return (BT_TEST((ulong_t *)featureset, feature));
1729ab5bb018SKeith M Wesolowski }
1730ab5bb018SKeith M Wesolowski 
1731ab5bb018SKeith M Wesolowski void
1732ab5bb018SKeith M Wesolowski add_x86_feature(void *featureset, uint_t feature)
1733ab5bb018SKeith M Wesolowski {
1734ab5bb018SKeith M Wesolowski 	ASSERT(feature < NUM_X86_FEATURES);
1735ab5bb018SKeith M Wesolowski 	BT_SET((ulong_t *)featureset, feature);
1736ab5bb018SKeith M Wesolowski }
1737ab5bb018SKeith M Wesolowski 
1738ab5bb018SKeith M Wesolowski void
1739ab5bb018SKeith M Wesolowski remove_x86_feature(void *featureset, uint_t feature)
1740ab5bb018SKeith M Wesolowski {
1741ab5bb018SKeith M Wesolowski 	ASSERT(feature < NUM_X86_FEATURES);
1742ab5bb018SKeith M Wesolowski 	BT_CLEAR((ulong_t *)featureset, feature);
1743ab5bb018SKeith M Wesolowski }
1744ab5bb018SKeith M Wesolowski 
1745ab5bb018SKeith M Wesolowski boolean_t
1746ab5bb018SKeith M Wesolowski compare_x86_featureset(void *setA, void *setB)
1747ab5bb018SKeith M Wesolowski {
1748ab5bb018SKeith M Wesolowski 	/*
1749ab5bb018SKeith M Wesolowski 	 * We assume that the unused bits of the bitmap are always zero.
1750ab5bb018SKeith M Wesolowski 	 */
1751ab5bb018SKeith M Wesolowski 	if (memcmp(setA, setB, BT_SIZEOFMAP(NUM_X86_FEATURES)) == 0) {
1752ab5bb018SKeith M Wesolowski 		return (B_TRUE);
1753ab5bb018SKeith M Wesolowski 	} else {
1754ab5bb018SKeith M Wesolowski 		return (B_FALSE);
1755ab5bb018SKeith M Wesolowski 	}
1756ab5bb018SKeith M Wesolowski }
1757ab5bb018SKeith M Wesolowski 
1758ab5bb018SKeith M Wesolowski void
1759ab5bb018SKeith M Wesolowski print_x86_featureset(void *featureset)
1760ab5bb018SKeith M Wesolowski {
1761ab5bb018SKeith M Wesolowski 	uint_t i;
1762ab5bb018SKeith M Wesolowski 
1763ab5bb018SKeith M Wesolowski 	for (i = 0; i < NUM_X86_FEATURES; i++) {
1764ab5bb018SKeith M Wesolowski 		if (is_x86_feature(featureset, i)) {
1765ab5bb018SKeith M Wesolowski 			cmn_err(CE_CONT, "?x86_feature: %s\n",
1766ab5bb018SKeith M Wesolowski 			    x86_feature_names[i]);
1767ab5bb018SKeith M Wesolowski 		}
1768ab5bb018SKeith M Wesolowski 	}
1769ab5bb018SKeith M Wesolowski }
1770ab5bb018SKeith M Wesolowski 
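/*
 * A short usage sketch for the accessors above (the local bitmap is
 * hypothetical; X86FSET_SSE2 is one of the real feature identifiers):
 *
 *	uchar_t fs[BT_SIZEOFMAP(NUM_X86_FEATURES)] = { 0 };
 *
 *	add_x86_feature(fs, X86FSET_SSE2);
 *	ASSERT(is_x86_feature(fs, X86FSET_SSE2));
 *	remove_x86_feature(fs, X86FSET_SSE2);
 */
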
1771ab5bb018SKeith M Wesolowski /* Note: This is the maximum size for the CPU, not the size of the structure. */
1772ab5bb018SKeith M Wesolowski static size_t xsave_state_size = 0;
1773ab5bb018SKeith M Wesolowski uint64_t xsave_bv_all = (XFEATURE_LEGACY_FP | XFEATURE_SSE);
1774ab5bb018SKeith M Wesolowski boolean_t xsave_force_disable = B_FALSE;
1775ab5bb018SKeith M Wesolowski extern int disable_smap;
1776ab5bb018SKeith M Wesolowski 
1777ab5bb018SKeith M Wesolowski /*
1778ab5bb018SKeith M Wesolowski  * This is set to the platform type we are running on.
1779ab5bb018SKeith M Wesolowski  */
1780ab5bb018SKeith M Wesolowski static int platform_type = -1;
1781ab5bb018SKeith M Wesolowski 
1782ab5bb018SKeith M Wesolowski #if !defined(__xpv)
1783ab5bb018SKeith M Wesolowski /*
1784ab5bb018SKeith M Wesolowski  * Variable to patch if hypervisor platform detection needs to be
1785ab5bb018SKeith M Wesolowski  * disabled (e.g. platform_type will always be HW_NATIVE if this is 0).
1786ab5bb018SKeith M Wesolowski  */
1787ab5bb018SKeith M Wesolowski int enable_platform_detection = 1;
1788ab5bb018SKeith M Wesolowski #endif
1789ab5bb018SKeith M Wesolowski 
1790ab5bb018SKeith M Wesolowski /*
1791ab5bb018SKeith M Wesolowski  * monitor/mwait info.
1792ab5bb018SKeith M Wesolowski  *
1793ab5bb018SKeith M Wesolowski  * size_actual and buf_actual are the real address and size allocated to get
1794ab5bb018SKeith M Wesolowski  * proper mwait_buf alignment.  buf_actual and size_actual should be passed
1795ab5bb018SKeith M Wesolowski  * to kmem_free().  Currently kmem_alloc() and mwait happen to both use
1796ab5bb018SKeith M Wesolowski  * processor cache-line alignment, but this is not guaranteed in the future.
1797ab5bb018SKeith M Wesolowski  */
1798ab5bb018SKeith M Wesolowski struct mwait_info {
1799ab5bb018SKeith M Wesolowski 	size_t		mon_min;	/* min size to avoid missed wakeups */
1800ab5bb018SKeith M Wesolowski 	size_t		mon_max;	/* size to avoid false wakeups */
1801ab5bb018SKeith M Wesolowski 	size_t		size_actual;	/* size actually allocated */
1802ab5bb018SKeith M Wesolowski 	void		*buf_actual;	/* memory actually allocated */
1803ab5bb018SKeith M Wesolowski 	uint32_t	support;	/* processor support of monitor/mwait */
1804ab5bb018SKeith M Wesolowski };
1805ab5bb018SKeith M Wesolowski 
1806ab5bb018SKeith M Wesolowski /*
1807ab5bb018SKeith M Wesolowski  * xsave/xrestor info.
1808ab5bb018SKeith M Wesolowski  *
1809ab5bb018SKeith M Wesolowski  * This structure contains HW feature bits and the size of the xsave save area.
1810ab5bb018SKeith M Wesolowski  * Note: the kernel declares a fixed size (AVX_XSAVE_SIZE) structure
1811ab5bb018SKeith M Wesolowski  * (xsave_state) to describe the xsave layout. However, at runtime the
1812ab5bb018SKeith M Wesolowski  * per-lwp xsave area is dynamically allocated based on xsav_max_size. The
1813ab5bb018SKeith M Wesolowski  * xsave_state structure simply represents the legacy layout of the beginning
1814ab5bb018SKeith M Wesolowski  * of the xsave area.
1815ab5bb018SKeith M Wesolowski  */
1816ab5bb018SKeith M Wesolowski struct xsave_info {
1817ab5bb018SKeith M Wesolowski 	uint32_t	xsav_hw_features_low;   /* Supported HW features */
1818ab5bb018SKeith M Wesolowski 	uint32_t	xsav_hw_features_high;  /* Supported HW features */
1819ab5bb018SKeith M Wesolowski 	size_t		xsav_max_size;  /* max size save area for HW features */
1820ab5bb018SKeith M Wesolowski 	size_t		ymm_size;	/* AVX: size of ymm save area */
1821ab5bb018SKeith M Wesolowski 	size_t		ymm_offset;	/* AVX: offset for ymm save area */
1822ab5bb018SKeith M Wesolowski 	size_t		bndregs_size;	/* MPX: size of bndregs save area */
1823ab5bb018SKeith M Wesolowski 	size_t		bndregs_offset;	/* MPX: offset for bndregs save area */
1824ab5bb018SKeith M Wesolowski 	size_t		bndcsr_size;	/* MPX: size of bndcsr save area */
1825ab5bb018SKeith M Wesolowski 	size_t		bndcsr_offset;	/* MPX: offset for bndcsr save area */
1826ab5bb018SKeith M Wesolowski 	size_t		opmask_size;	/* AVX512: size of opmask save */
1827ab5bb018SKeith M Wesolowski 	size_t		opmask_offset;	/* AVX512: offset for opmask save */
1828ab5bb018SKeith M Wesolowski 	size_t		zmmlo_size;	/* AVX512: size of zmm 256 save */
1829ab5bb018SKeith M Wesolowski 	size_t		zmmlo_offset;	/* AVX512: offset for zmm 256 save */
1830ab5bb018SKeith M Wesolowski 	size_t		zmmhi_size;	/* AVX512: size of zmm hi reg save */
1831ab5bb018SKeith M Wesolowski 	size_t		zmmhi_offset;	/* AVX512: offset for zmm hi reg save */
1832ab5bb018SKeith M Wesolowski };
1833ab5bb018SKeith M Wesolowski 
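/*
 * The per-component sizes and offsets above come from CPUID leaf 0xD: for
 * state component n (e.g. n == 2 for the AVX ymm state), sub-leaf n returns
 * the component's size in %eax and its offset in %ebx. A sketch of filling
 * in one pair (illustrative; the authoritative enumeration happens later in
 * this file):
 *
 *	struct cpuid_regs r;
 *
 *	r.cp_eax = 0xD;
 *	r.cp_ecx = 2;			/* the AVX ymm component */
 *	(void) __cpuid_insn(&r);
 *	cpi->cpi_xsave.ymm_size = r.cp_eax;
 *	cpi->cpi_xsave.ymm_offset = r.cp_ebx;
 */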
1834ab5bb018SKeith M Wesolowski 
1835ab5bb018SKeith M Wesolowski /*
1836ab5bb018SKeith M Wesolowski  * These constants determine how many of the elements of the
1837ab5bb018SKeith M Wesolowski  * cpuid we cache in the cpuid_info data structure; the
1838ab5bb018SKeith M Wesolowski  * remaining elements are accessible via the cpuid instruction.
1839ab5bb018SKeith M Wesolowski  */
1840ab5bb018SKeith M Wesolowski 
1841ab5bb018SKeith M Wesolowski #define	NMAX_CPI_STD	8		/* eax = 0 .. 7 */
1842651a12cbSRobert Mustacchi #define	NMAX_CPI_EXTD	0x22		/* eax = 0x80000000 .. 0x80000021 */
1843dd23d762SRobert Mustacchi #define	NMAX_CPI_TOPO	0x10		/* Sanity check on leaf 8X26, 1F */
1844ab5bb018SKeith M Wesolowski 
1845ab5bb018SKeith M Wesolowski /*
1846ab5bb018SKeith M Wesolowski  * See the big theory statement for a more detailed explanation of what some of
1847ab5bb018SKeith M Wesolowski  * these members mean.
1848ab5bb018SKeith M Wesolowski  */
1849ab5bb018SKeith M Wesolowski struct cpuid_info {
1850ab5bb018SKeith M Wesolowski 	uint_t cpi_pass;		/* last pass completed */
1851ab5bb018SKeith M Wesolowski 	/*
1852ab5bb018SKeith M Wesolowski 	 * standard function information
1853ab5bb018SKeith M Wesolowski 	 */
1854ab5bb018SKeith M Wesolowski 	uint_t cpi_maxeax;		/* fn 0: %eax */
1855ab5bb018SKeith M Wesolowski 	char cpi_vendorstr[13];		/* fn 0: %ebx:%ecx:%edx */
1856ab5bb018SKeith M Wesolowski 	uint_t cpi_vendor;		/* enum of cpi_vendorstr */
1857ab5bb018SKeith M Wesolowski 
1858ab5bb018SKeith M Wesolowski 	uint_t cpi_family;		/* fn 1: extended family */
1859ab5bb018SKeith M Wesolowski 	uint_t cpi_model;		/* fn 1: extended model */
1860ab5bb018SKeith M Wesolowski 	uint_t cpi_step;		/* fn 1: stepping */
1861ab5bb018SKeith M Wesolowski 	chipid_t cpi_chipid;		/* fn 1: %ebx:  Intel: chip # */
1862ab5bb018SKeith M Wesolowski 					/*		AMD: package/socket # */
1863ab5bb018SKeith M Wesolowski 	uint_t cpi_brandid;		/* fn 1: %ebx: brand ID */
1864ab5bb018SKeith M Wesolowski 	int cpi_clogid;			/* fn 1: %ebx: thread # */
1865ab5bb018SKeith M Wesolowski 	uint_t cpi_ncpu_per_chip;	/* fn 1: %ebx: logical cpu count */
1866ab5bb018SKeith M Wesolowski 	uint8_t cpi_cacheinfo[16];	/* fn 2: intel-style cache desc */
1867ab5bb018SKeith M Wesolowski 	uint_t cpi_ncache;		/* fn 2: number of elements */
1868ab5bb018SKeith M Wesolowski 	uint_t cpi_ncpu_shr_last_cache;	/* fn 4: %eax: ncpus sharing cache */
1869ab5bb018SKeith M Wesolowski 	id_t cpi_last_lvl_cacheid;	/* fn 4: %eax: derived cache id */
1870ab5bb018SKeith M Wesolowski 	uint_t cpi_cache_leaf_size;	/* Number of cache elements */
1871ab5bb018SKeith M Wesolowski 					/* Intel fn: 4, AMD fn: 8000001d */
1872dd23d762SRobert Mustacchi 	struct cpuid_regs **cpi_cache_leaves;	/* Actual leaves from above */
1873ab5bb018SKeith M Wesolowski 	struct cpuid_regs cpi_std[NMAX_CPI_STD];	/* 0 .. 7 */
18745a9c36deSDan McDonald 	struct cpuid_regs cpi_sub7[2];	/* Leaf 7, sub-leaves 1-2 */
1875ab5bb018SKeith M Wesolowski 	/*
1876ab5bb018SKeith M Wesolowski 	 * extended function information
1877ab5bb018SKeith M Wesolowski 	 */
1878ab5bb018SKeith M Wesolowski 	uint_t cpi_xmaxeax;		/* fn 0x80000000: %eax */
1879ab5bb018SKeith M Wesolowski 	char cpi_brandstr[49];		/* fn 0x8000000[234] */
1880ab5bb018SKeith M Wesolowski 	uint8_t cpi_pabits;		/* fn 0x80000006: %eax */
1881ab5bb018SKeith M Wesolowski 	uint8_t	cpi_vabits;		/* fn 0x80000006: %eax */
1882ab5bb018SKeith M Wesolowski 	uint8_t cpi_fp_amd_save;	/* AMD: FP error pointer save rqd. */
1883ab5bb018SKeith M Wesolowski 	struct	cpuid_regs cpi_extd[NMAX_CPI_EXTD];	/* 0x800000XX */
1884ab5bb018SKeith M Wesolowski 
1885ab5bb018SKeith M Wesolowski 	id_t cpi_coreid;		/* same coreid => strands share core */
1886ab5bb018SKeith M Wesolowski 	int cpi_pkgcoreid;		/* core number within single package */
1887ab5bb018SKeith M Wesolowski 	uint_t cpi_ncore_per_chip;	/* AMD: fn 0x80000008: %ecx[7-0] */
1888ab5bb018SKeith M Wesolowski 					/* Intel: fn 4: %eax[31-26] */
1889ab5bb018SKeith M Wesolowski 
1890ab5bb018SKeith M Wesolowski 	/*
1891ab5bb018SKeith M Wesolowski 	 * These values represent the number of bits that are required to store
1892ab5bb018SKeith M Wesolowski 	 * information about the number of cores and threads.
1893ab5bb018SKeith M Wesolowski 	 */
1894ab5bb018SKeith M Wesolowski 	uint_t cpi_ncore_bits;
1895ab5bb018SKeith M Wesolowski 	uint_t cpi_nthread_bits;
1896ab5bb018SKeith M Wesolowski 	/*
1897ab5bb018SKeith M Wesolowski 	 * supported feature information
1898ab5bb018SKeith M Wesolowski 	 */
1899ab5bb018SKeith M Wesolowski 	uint32_t cpi_support[6];
1900ab5bb018SKeith M Wesolowski #define	STD_EDX_FEATURES	0
1901ab5bb018SKeith M Wesolowski #define	AMD_EDX_FEATURES	1
1902ab5bb018SKeith M Wesolowski #define	TM_EDX_FEATURES		2
1903ab5bb018SKeith M Wesolowski #define	STD_ECX_FEATURES	3
1904ab5bb018SKeith M Wesolowski #define	AMD_ECX_FEATURES	4
1905ab5bb018SKeith M Wesolowski #define	STD_EBX_FEATURES	5
1906ab5bb018SKeith M Wesolowski 	/*
1907ab5bb018SKeith M Wesolowski 	 * Synthesized information, where known.
1908ab5bb018SKeith M Wesolowski 	 */
190922e4c3acSKeith M Wesolowski 	x86_chiprev_t cpi_chiprev;	/* See X86_CHIPREV_* in x86_archext.h */
1910ab5bb018SKeith M Wesolowski 	const char *cpi_chiprevstr;	/* May be NULL if chiprev unknown */
1911ab5bb018SKeith M Wesolowski 	uint32_t cpi_socket;		/* Chip package/socket type */
191222e4c3acSKeith M Wesolowski 	x86_uarchrev_t cpi_uarchrev;	/* Microarchitecture and revision */
1913ab5bb018SKeith M Wesolowski 
1914ab5bb018SKeith M Wesolowski 	struct mwait_info cpi_mwait;	/* fn 5: monitor/mwait info */
1915ab5bb018SKeith M Wesolowski 	uint32_t cpi_apicid;
1916ab5bb018SKeith M Wesolowski 	uint_t cpi_procnodeid;		/* AMD: nodeID on HT, Intel: chipid */
1917ab5bb018SKeith M Wesolowski 	uint_t cpi_procnodes_per_pkg;	/* AMD: # of nodes in the package */
1918ab5bb018SKeith M Wesolowski 					/* Intel: 1 */
1919ab5bb018SKeith M Wesolowski 	uint_t cpi_compunitid;		/* AMD: ComputeUnit ID, Intel: coreid */
1920ab5bb018SKeith M Wesolowski 	uint_t cpi_cores_per_compunit;	/* AMD: # of cores in the ComputeUnit */
1921ab5bb018SKeith M Wesolowski 
1922ab5bb018SKeith M Wesolowski 	struct xsave_info cpi_xsave;	/* fn D: xsave/xrestor info */
1923dd23d762SRobert Mustacchi 
1924dd23d762SRobert Mustacchi 	/*
1925dd23d762SRobert Mustacchi 	 * AMD and Intel extended topology information. Leaf 8X26 (AMD) and
1926dd23d762SRobert Mustacchi 	 * eventually leaf 0x1F (Intel).
1927dd23d762SRobert Mustacchi 	 */
1928dd23d762SRobert Mustacchi 	uint_t cpi_topo_nleaves;
1929dd23d762SRobert Mustacchi 	struct cpuid_regs cpi_topo[NMAX_CPI_TOPO];
1930ab5bb018SKeith M Wesolowski };
1931ab5bb018SKeith M Wesolowski 
1932ab5bb018SKeith M Wesolowski 
1933ab5bb018SKeith M Wesolowski static struct cpuid_info cpuid_info0;
1934ab5bb018SKeith M Wesolowski 
1935ab5bb018SKeith M Wesolowski /*
1936ab5bb018SKeith M Wesolowski  * These bit fields are defined by the Intel Application Note AP-485
1937ab5bb018SKeith M Wesolowski  * "Intel Processor Identification and the CPUID Instruction"
1938ab5bb018SKeith M Wesolowski  */
1939ab5bb018SKeith M Wesolowski #define	CPI_FAMILY_XTD(cpi)	BITX((cpi)->cpi_std[1].cp_eax, 27, 20)
1940ab5bb018SKeith M Wesolowski #define	CPI_MODEL_XTD(cpi)	BITX((cpi)->cpi_std[1].cp_eax, 19, 16)
1941ab5bb018SKeith M Wesolowski #define	CPI_TYPE(cpi)		BITX((cpi)->cpi_std[1].cp_eax, 13, 12)
1942ab5bb018SKeith M Wesolowski #define	CPI_FAMILY(cpi)		BITX((cpi)->cpi_std[1].cp_eax, 11, 8)
1943ab5bb018SKeith M Wesolowski #define	CPI_STEP(cpi)		BITX((cpi)->cpi_std[1].cp_eax, 3, 0)
1944ab5bb018SKeith M Wesolowski #define	CPI_MODEL(cpi)		BITX((cpi)->cpi_std[1].cp_eax, 7, 4)
1945ab5bb018SKeith M Wesolowski 
1946ab5bb018SKeith M Wesolowski #define	CPI_FEATURES_EDX(cpi)		((cpi)->cpi_std[1].cp_edx)
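/*
 * The effective family and model combine the base and extended fields. As a
 * sketch of the conventional decoding (illustrative; the authoritative logic
 * runs during the identification pass later in this file):
 *
 *	family = CPI_FAMILY(cpi);
 *	if (family == 0xf)
 *		family += CPI_FAMILY_XTD(cpi);
 *	model = CPI_MODEL(cpi);
 *	if (IS_EXTENDED_MODEL_INTEL(cpi))	/* AMD: only family == 0xf */
 *		model += CPI_MODEL_XTD(cpi) << 4;
 */
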
1947ab5bb018SKeith M Wesolowski #define	CPI_FEATURES_ECX(cpi)		((cpi)->cpi_std[1].cp_ecx)
1948ab5bb018SKeith M Wesolowski #define	CPI_FEATURES_XTD_EDX(cpi)	((cpi)->cpi_extd[1].cp_edx)
1949ab5bb018SKeith M Wesolowski #define	CPI_FEATURES_XTD_ECX(cpi)	((cpi)->cpi_extd[1].cp_ecx)
1950ab5bb018SKeith M Wesolowski #define	CPI_FEATURES_7_0_EBX(cpi)	((cpi)->cpi_std[7].cp_ebx)
1951ab5bb018SKeith M Wesolowski #define	CPI_FEATURES_7_0_ECX(cpi)	((cpi)->cpi_std[7].cp_ecx)
1952ab5bb018SKeith M Wesolowski #define	CPI_FEATURES_7_0_EDX(cpi)	((cpi)->cpi_std[7].cp_edx)
195356726c7eSRobert Mustacchi #define	CPI_FEATURES_7_1_EAX(cpi)	((cpi)->cpi_sub7[0].cp_eax)
19545a9c36deSDan McDonald #define	CPI_FEATURES_7_2_EDX(cpi)	((cpi)->cpi_sub7[1].cp_edx)
1955ab5bb018SKeith M Wesolowski 
1956ab5bb018SKeith M Wesolowski #define	CPI_BRANDID(cpi)	BITX((cpi)->cpi_std[1].cp_ebx, 7, 0)
1957ab5bb018SKeith M Wesolowski #define	CPI_CHUNKS(cpi)		BITX((cpi)->cpi_std[1].cp_ebx, 15, 7)
1958ab5bb018SKeith M Wesolowski #define	CPI_CPU_COUNT(cpi)	BITX((cpi)->cpi_std[1].cp_ebx, 23, 16)
1959ab5bb018SKeith M Wesolowski #define	CPI_APIC_ID(cpi)	BITX((cpi)->cpi_std[1].cp_ebx, 31, 24)
1960ab5bb018SKeith M Wesolowski 
1961ab5bb018SKeith M Wesolowski #define	CPI_MAXEAX_MAX		0x100		/* sanity control */
1962ab5bb018SKeith M Wesolowski #define	CPI_XMAXEAX_MAX		0x80000100
1963ab5bb018SKeith M Wesolowski #define	CPI_FN4_ECX_MAX		0x20		/* sanity: max fn 4 levels */
1964ab5bb018SKeith M Wesolowski #define	CPI_FNB_ECX_MAX		0x20		/* sanity: max fn B levels */
1965ab5bb018SKeith M Wesolowski 
1966ab5bb018SKeith M Wesolowski /*
1967ab5bb018SKeith M Wesolowski  * Function 4 (Deterministic Cache Parameters) macros
1968ab5bb018SKeith M Wesolowski  * Defined by Intel Application Note AP-485
1969ab5bb018SKeith M Wesolowski  */
1970ab5bb018SKeith M Wesolowski #define	CPI_NUM_CORES(regs)		BITX((regs)->cp_eax, 31, 26)
1971ab5bb018SKeith M Wesolowski #define	CPI_NTHR_SHR_CACHE(regs)	BITX((regs)->cp_eax, 25, 14)
1972ab5bb018SKeith M Wesolowski #define	CPI_FULL_ASSOC_CACHE(regs)	BITX((regs)->cp_eax, 9, 9)
1973ab5bb018SKeith M Wesolowski #define	CPI_SELF_INIT_CACHE(regs)	BITX((regs)->cp_eax, 8, 8)
1974ab5bb018SKeith M Wesolowski #define	CPI_CACHE_LVL(regs)		BITX((regs)->cp_eax, 7, 5)
1975ab5bb018SKeith M Wesolowski #define	CPI_CACHE_TYPE(regs)		BITX((regs)->cp_eax, 4, 0)
1976dd23d762SRobert Mustacchi #define	CPI_CACHE_TYPE_DONE	0
1977dd23d762SRobert Mustacchi #define	CPI_CACHE_TYPE_DATA	1
1978dd23d762SRobert Mustacchi #define	CPI_CACHE_TYPE_INSTR	2
1979dd23d762SRobert Mustacchi #define	CPI_CACHE_TYPE_UNIFIED	3
1980ab5bb018SKeith M Wesolowski #define	CPI_CPU_LEVEL_TYPE(regs)	BITX((regs)->cp_ecx, 15, 8)
1981ab5bb018SKeith M Wesolowski 
1982ab5bb018SKeith M Wesolowski #define	CPI_CACHE_WAYS(regs)		BITX((regs)->cp_ebx, 31, 22)
1983ab5bb018SKeith M Wesolowski #define	CPI_CACHE_PARTS(regs)		BITX((regs)->cp_ebx, 21, 12)
1984ab5bb018SKeith M Wesolowski #define	CPI_CACHE_COH_LN_SZ(regs)	BITX((regs)->cp_ebx, 11, 0)
1985ab5bb018SKeith M Wesolowski 
1986ab5bb018SKeith M Wesolowski #define	CPI_CACHE_SETS(regs)		BITX((regs)->cp_ecx, 31, 0)
1987ab5bb018SKeith M Wesolowski 
1988ab5bb018SKeith M Wesolowski #define	CPI_PREFCH_STRIDE(regs)		BITX((regs)->cp_edx, 9, 0)
1989ab5bb018SKeith M Wesolowski 
1990ab5bb018SKeith M Wesolowski 
1991ab5bb018SKeith M Wesolowski /*
1992ab5bb018SKeith M Wesolowski  * A couple of shorthand macros to identify "later" P6-family chips
1993ab5bb018SKeith M Wesolowski  * like the Pentium M and Core.  First, the "older" P6-based stuff
1994ab5bb018SKeith M Wesolowski  * (loosely defined as "pre-Pentium-4"):
1995ab5bb018SKeith M Wesolowski  * P6, PII, Mobile PII, PII Xeon, PIII, Mobile PIII, PIII Xeon
1996ab5bb018SKeith M Wesolowski  */
1997ab5bb018SKeith M Wesolowski #define	IS_LEGACY_P6(cpi) (			\
1998ab5bb018SKeith M Wesolowski 	cpi->cpi_family == 6 &&			\
1999ab5bb018SKeith M Wesolowski 		(cpi->cpi_model == 1 ||		\
2000ab5bb018SKeith M Wesolowski 		cpi->cpi_model == 3 ||		\
2001ab5bb018SKeith M Wesolowski 		cpi->cpi_model == 5 ||		\
2002ab5bb018SKeith M Wesolowski 		cpi->cpi_model == 6 ||		\
2003ab5bb018SKeith M Wesolowski 		cpi->cpi_model == 7 ||		\
2004ab5bb018SKeith M Wesolowski 		cpi->cpi_model == 8 ||		\
2005ab5bb018SKeith M Wesolowski 		cpi->cpi_model == 0xA ||	\
2006ab5bb018SKeith M Wesolowski 		cpi->cpi_model == 0xB)		\
2007ab5bb018SKeith M Wesolowski )
2008ab5bb018SKeith M Wesolowski 
2009ab5bb018SKeith M Wesolowski /* A "new F6" is everything with family 6 that's not the above */
2010ab5bb018SKeith M Wesolowski #define	IS_NEW_F6(cpi) ((cpi->cpi_family == 6) && !IS_LEGACY_P6(cpi))
2011ab5bb018SKeith M Wesolowski 
2012ab5bb018SKeith M Wesolowski /* Extended family/model support */
2013ab5bb018SKeith M Wesolowski #define	IS_EXTENDED_MODEL_INTEL(cpi) (cpi->cpi_family == 0x6 || \
2014ab5bb018SKeith M Wesolowski 	cpi->cpi_family >= 0xf)
2015ab5bb018SKeith M Wesolowski 
2016ab5bb018SKeith M Wesolowski /*
2017ab5bb018SKeith M Wesolowski  * Info for monitor/mwait idle loop.
2018ab5bb018SKeith M Wesolowski  *
2019ab5bb018SKeith M Wesolowski  * See cpuid section of "Intel 64 and IA-32 Architectures Software Developer's
2020ab5bb018SKeith M Wesolowski  * Manual Volume 2A: Instruction Set Reference, A-M" #25366-022US, November
2021ab5bb018SKeith M Wesolowski  * 2006.
2022ab5bb018SKeith M Wesolowski  * See MONITOR/MWAIT section of "AMD64 Architecture Programmer's Manual
2023ab5bb018SKeith M Wesolowski  * Documentation Updates" #33633, Rev 2.05, December 2006.
2024ab5bb018SKeith M Wesolowski  */
2025ab5bb018SKeith M Wesolowski #define	MWAIT_SUPPORT		(0x00000001)	/* mwait supported */
2026ab5bb018SKeith M Wesolowski #define	MWAIT_EXTENSIONS	(0x00000002)	/* extension supported */
2027ab5bb018SKeith M Wesolowski #define	MWAIT_ECX_INT_ENABLE	(0x00000004)	/* ecx 1 extension supported */
2028ab5bb018SKeith M Wesolowski #define	MWAIT_SUPPORTED(cpi)	((cpi)->cpi_std[1].cp_ecx & CPUID_INTC_ECX_MON)
2029ab5bb018SKeith M Wesolowski #define	MWAIT_INT_ENABLE(cpi)	((cpi)->cpi_std[5].cp_ecx & 0x2)
2030ab5bb018SKeith M Wesolowski #define	MWAIT_EXTENSION(cpi)	((cpi)->cpi_std[5].cp_ecx & 0x1)
2031ab5bb018SKeith M Wesolowski #define	MWAIT_SIZE_MIN(cpi)	BITX((cpi)->cpi_std[5].cp_eax, 15, 0)
2032ab5bb018SKeith M Wesolowski #define	MWAIT_SIZE_MAX(cpi)	BITX((cpi)->cpi_std[5].cp_ebx, 15, 0)
2033ab5bb018SKeith M Wesolowski /*
2034ab5bb018SKeith M Wesolowski  * Number of sub-cstates for a given c-state.
2035ab5bb018SKeith M Wesolowski  */
2036ab5bb018SKeith M Wesolowski #define	MWAIT_NUM_SUBC_STATES(cpi, c_state)			\
2037ab5bb018SKeith M Wesolowski 	BITX((cpi)->cpi_std[5].cp_edx, c_state + 3, c_state)
2038ab5bb018SKeith M Wesolowski 
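/*
 * Note that the c_state argument above is the bit offset of the 4-bit field
 * in %edx (4 * the C-state number), not the C-state number itself. For
 * example, MWAIT_NUM_SUBC_STATES(cpi, 4) extracts edx[7:4], the number of
 * C1 sub-states.
 */
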
2039ab5bb018SKeith M Wesolowski /*
2040ab5bb018SKeith M Wesolowski  * XSAVE leaf 0xD enumeration
2041ab5bb018SKeith M Wesolowski  */
2042ab5bb018SKeith M Wesolowski #define	CPUID_LEAFD_2_YMM_OFFSET	576
2043ab5bb018SKeith M Wesolowski #define	CPUID_LEAFD_2_YMM_SIZE		256
2044ab5bb018SKeith M Wesolowski 
2045ab5bb018SKeith M Wesolowski /*
2046ab5bb018SKeith M Wesolowski  * Common extended leaf names to cut down on typos.
2047ab5bb018SKeith M Wesolowski  */
2048ab5bb018SKeith M Wesolowski #define	CPUID_LEAF_EXT_0		0x80000000
2049ab5bb018SKeith M Wesolowski #define	CPUID_LEAF_EXT_8		0x80000008
2050ab5bb018SKeith M Wesolowski #define	CPUID_LEAF_EXT_1d		0x8000001d
2051ab5bb018SKeith M Wesolowski #define	CPUID_LEAF_EXT_1e		0x8000001e
2052651a12cbSRobert Mustacchi #define	CPUID_LEAF_EXT_21		0x80000021
2053dd23d762SRobert Mustacchi #define	CPUID_LEAF_EXT_26		0x80000026
2054ab5bb018SKeith M Wesolowski 
2055ab5bb018SKeith M Wesolowski /*
2056dd23d762SRobert Mustacchi  * Functions we consume from cpuid_subr.c;  don't publish these in a header
2057ab5bb018SKeith M Wesolowski  * file to try and keep people using the expected cpuid_* interfaces.
2058ab5bb018SKeith M Wesolowski  */
2059ab5bb018SKeith M Wesolowski extern uint32_t _cpuid_skt(uint_t, uint_t, uint_t, uint_t);
2060ab5bb018SKeith M Wesolowski extern const char *_cpuid_sktstr(uint_t, uint_t, uint_t, uint_t);
206122e4c3acSKeith M Wesolowski extern x86_chiprev_t _cpuid_chiprev(uint_t, uint_t, uint_t, uint_t);
2062ab5bb018SKeith M Wesolowski extern const char *_cpuid_chiprevstr(uint_t, uint_t, uint_t, uint_t);
206322e4c3acSKeith M Wesolowski extern x86_uarchrev_t _cpuid_uarchrev(uint_t, uint_t, uint_t, uint_t);
2064ab5bb018SKeith M Wesolowski extern uint_t _cpuid_vendorstr_to_vendorcode(char *);
2065ab5bb018SKeith M Wesolowski 
2066ab5bb018SKeith M Wesolowski /*
2067ab5bb018SKeith M Wesolowski  * Apply various platform-dependent restrictions where the
2068ab5bb018SKeith M Wesolowski  * underlying platform restrictions mean the CPU can be marked
2069ab5bb018SKeith M Wesolowski  * as less capable than its cpuid instruction would imply.
2070ab5bb018SKeith M Wesolowski  */
2071ab5bb018SKeith M Wesolowski #if defined(__xpv)
2072ab5bb018SKeith M Wesolowski static void
2073ab5bb018SKeith M Wesolowski platform_cpuid_mangle(uint_t vendor, uint32_t eax, struct cpuid_regs *cp)
2074ab5bb018SKeith M Wesolowski {
2075ab5bb018SKeith M Wesolowski 	switch (eax) {
2076ab5bb018SKeith M Wesolowski 	case 1: {
2077ab5bb018SKeith M Wesolowski 		uint32_t mcamask = DOMAIN_IS_INITDOMAIN(xen_info) ?
2078ab5bb018SKeith M Wesolowski 		    0 : CPUID_INTC_EDX_MCA;
2079ab5bb018SKeith M Wesolowski 		cp->cp_edx &=
2080ab5bb018SKeith M Wesolowski 		    ~(mcamask |
2081ab5bb018SKeith M Wesolowski 		    CPUID_INTC_EDX_PSE |
2082ab5bb018SKeith M Wesolowski 		    CPUID_INTC_EDX_VME | CPUID_INTC_EDX_DE |
2083ab5bb018SKeith M Wesolowski 		    CPUID_INTC_EDX_SEP | CPUID_INTC_EDX_MTRR |
2084ab5bb018SKeith M Wesolowski 		    CPUID_INTC_EDX_PGE | CPUID_INTC_EDX_PAT |
2085ab5bb018SKeith M Wesolowski 		    CPUID_AMD_EDX_SYSC | CPUID_INTC_EDX_SEP |
2086ab5bb018SKeith M Wesolowski 		    CPUID_INTC_EDX_PSE36 | CPUID_INTC_EDX_HTT);
2087ab5bb018SKeith M Wesolowski 		break;
2088ab5bb018SKeith M Wesolowski 	}
2089ab5bb018SKeith M Wesolowski 
2090ab5bb018SKeith M Wesolowski 	case 0x80000001:
2091ab5bb018SKeith M Wesolowski 		cp->cp_edx &=
2092ab5bb018SKeith M Wesolowski 		    ~(CPUID_AMD_EDX_PSE |
2093ab5bb018SKeith M Wesolowski 		    CPUID_INTC_EDX_VME | CPUID_INTC_EDX_DE |
2094ab5bb018SKeith M Wesolowski 		    CPUID_AMD_EDX_MTRR | CPUID_AMD_EDX_PGE |
2095ab5bb018SKeith M Wesolowski 		    CPUID_AMD_EDX_PAT | CPUID_AMD_EDX_PSE36 |
2096ab5bb018SKeith M Wesolowski 		    CPUID_AMD_EDX_SYSC | CPUID_INTC_EDX_SEP |
2097ab5bb018SKeith M Wesolowski 		    CPUID_AMD_EDX_TSCP);
2098ab5bb018SKeith M Wesolowski 		cp->cp_ecx &= ~CPUID_AMD_ECX_CMP_LGCY;
2099ab5bb018SKeith M Wesolowski 		break;
2100ab5bb018SKeith M Wesolowski 	default:
2101ab5bb018SKeith M Wesolowski 		break;
2102ab5bb018SKeith M Wesolowski 	}
2103ab5bb018SKeith M Wesolowski 
2104ab5bb018SKeith M Wesolowski 	switch (vendor) {
2105ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Intel:
2106ab5bb018SKeith M Wesolowski 		switch (eax) {
2107ab5bb018SKeith M Wesolowski 		case 4:
2108ab5bb018SKeith M Wesolowski 			/*
2109ab5bb018SKeith M Wesolowski 			 * Zero out the (ncores-per-chip - 1) field
2110ab5bb018SKeith M Wesolowski 			 */
2111ab5bb018SKeith M Wesolowski 			cp->cp_eax &= 0x03ffffff;
2112ab5bb018SKeith M Wesolowski 			break;
2113ab5bb018SKeith M Wesolowski 		default:
2114ab5bb018SKeith M Wesolowski 			break;
2115ab5bb018SKeith M Wesolowski 		}
2116ab5bb018SKeith M Wesolowski 		break;
2117ab5bb018SKeith M Wesolowski 	case X86_VENDOR_AMD:
2118ab5bb018SKeith M Wesolowski 	case X86_VENDOR_HYGON:
2119ab5bb018SKeith M Wesolowski 		switch (eax) {
2120ab5bb018SKeith M Wesolowski 
2121ab5bb018SKeith M Wesolowski 		case 0x80000001:
2122ab5bb018SKeith M Wesolowski 			cp->cp_ecx &= ~CPUID_AMD_ECX_CR8D;
2123ab5bb018SKeith M Wesolowski 			break;
2124ab5bb018SKeith M Wesolowski 
2125ab5bb018SKeith M Wesolowski 		case CPUID_LEAF_EXT_8:
2126ab5bb018SKeith M Wesolowski 			/*
2127ab5bb018SKeith M Wesolowski 			 * Zero out the (ncores-per-chip - 1) field
2128ab5bb018SKeith M Wesolowski 			 */
2129ab5bb018SKeith M Wesolowski 			cp->cp_ecx &= 0xffffff00;
2130ab5bb018SKeith M Wesolowski 			break;
2131ab5bb018SKeith M Wesolowski 		default:
2132ab5bb018SKeith M Wesolowski 			break;
2133ab5bb018SKeith M Wesolowski 		}
2134ab5bb018SKeith M Wesolowski 		break;
2135ab5bb018SKeith M Wesolowski 	default:
2136ab5bb018SKeith M Wesolowski 		break;
2137ab5bb018SKeith M Wesolowski 	}
2138ab5bb018SKeith M Wesolowski }
2139ab5bb018SKeith M Wesolowski #else
2140ab5bb018SKeith M Wesolowski #define	platform_cpuid_mangle(vendor, eax, cp)	/* nothing */
2141ab5bb018SKeith M Wesolowski #endif
2142ab5bb018SKeith M Wesolowski 
2143ab5bb018SKeith M Wesolowski /*
2144ab5bb018SKeith M Wesolowski  *  Some undocumented ways of patching the results of the cpuid
2145ab5bb018SKeith M Wesolowski  *  instruction to permit running Solaris 10 on future cpus that
2146ab5bb018SKeith M Wesolowski  *  we don't currently support.  Could be set to non-zero values
2147ab5bb018SKeith M Wesolowski  *  via settings in eeprom.
2148ab5bb018SKeith M Wesolowski  */
2149ab5bb018SKeith M Wesolowski 
2150ab5bb018SKeith M Wesolowski uint32_t cpuid_feature_ecx_include;
2151ab5bb018SKeith M Wesolowski uint32_t cpuid_feature_ecx_exclude;
2152ab5bb018SKeith M Wesolowski uint32_t cpuid_feature_edx_include;
2153ab5bb018SKeith M Wesolowski uint32_t cpuid_feature_edx_exclude;
2154ab5bb018SKeith M Wesolowski 
2155ab5bb018SKeith M Wesolowski /*
2156ab5bb018SKeith M Wesolowski  * Allocate space for mcpu_cpi in the machcpu structure for all non-boot CPUs.
2157ab5bb018SKeith M Wesolowski  */
2158ab5bb018SKeith M Wesolowski void
2159ab5bb018SKeith M Wesolowski cpuid_alloc_space(cpu_t *cpu)
2160ab5bb018SKeith M Wesolowski {
2161ab5bb018SKeith M Wesolowski 	/*
2162ab5bb018SKeith M Wesolowski 	 * By convention, cpu0 is the boot cpu, which is set up
2163ab5bb018SKeith M Wesolowski 	 * before memory allocation is available.  All other cpus get
2164ab5bb018SKeith M Wesolowski 	 * their cpuid_info struct allocated here.
2165ab5bb018SKeith M Wesolowski 	 */
2166ab5bb018SKeith M Wesolowski 	ASSERT(cpu->cpu_id != 0);
2167ab5bb018SKeith M Wesolowski 	ASSERT(cpu->cpu_m.mcpu_cpi == NULL);
2168ab5bb018SKeith M Wesolowski 	cpu->cpu_m.mcpu_cpi =
2169ab5bb018SKeith M Wesolowski 	    kmem_zalloc(sizeof (*cpu->cpu_m.mcpu_cpi), KM_SLEEP);
2170ab5bb018SKeith M Wesolowski }
2171ab5bb018SKeith M Wesolowski 
2172ab5bb018SKeith M Wesolowski void
2173ab5bb018SKeith M Wesolowski cpuid_free_space(cpu_t *cpu)
2174ab5bb018SKeith M Wesolowski {
2175ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
2176ab5bb018SKeith M Wesolowski 	int i;
2177ab5bb018SKeith M Wesolowski 
2178ab5bb018SKeith M Wesolowski 	ASSERT(cpi != NULL);
2179ab5bb018SKeith M Wesolowski 	ASSERT(cpi != &cpuid_info0);
2180ab5bb018SKeith M Wesolowski 
2181ab5bb018SKeith M Wesolowski 	/*
2182ab5bb018SKeith M Wesolowski 	 * Free up any cache leaf related dynamic storage. The first entry was
2183ab5bb018SKeith M Wesolowski 	 * cached from the standard cpuid storage, so we should not free it.
2184ab5bb018SKeith M Wesolowski 	 */
2185ab5bb018SKeith M Wesolowski 	for (i = 1; i < cpi->cpi_cache_leaf_size; i++)
2186ab5bb018SKeith M Wesolowski 		kmem_free(cpi->cpi_cache_leaves[i], sizeof (struct cpuid_regs));
2187ab5bb018SKeith M Wesolowski 	if (cpi->cpi_cache_leaf_size > 0)
2188ab5bb018SKeith M Wesolowski 		kmem_free(cpi->cpi_cache_leaves,
2189ab5bb018SKeith M Wesolowski 		    cpi->cpi_cache_leaf_size * sizeof (struct cpuid_regs *));
2190ab5bb018SKeith M Wesolowski 
2191ab5bb018SKeith M Wesolowski 	kmem_free(cpi, sizeof (*cpi));
2192ab5bb018SKeith M Wesolowski 	cpu->cpu_m.mcpu_cpi = NULL;
2193ab5bb018SKeith M Wesolowski }
2194ab5bb018SKeith M Wesolowski 
2195ab5bb018SKeith M Wesolowski #if !defined(__xpv)
2196ab5bb018SKeith M Wesolowski /*
2197ab5bb018SKeith M Wesolowski  * Determine the type of the underlying platform. This is used to customize
2198ab5bb018SKeith M Wesolowski  * initialization of various subsystems (e.g. TSC). determine_platform() must
2199ab5bb018SKeith M Wesolowski  * only ever be called once to prevent two processors from seeing different
2200ab5bb018SKeith M Wesolowski  * values of platform_type. Must be called before cpuid_pass_ident(), the
2201ab5bb018SKeith M Wesolowski  * earliest consumer to execute; the identification pass will call
2202ab5bb018SKeith M Wesolowski  * synth_amd_info() to compute the chiprev, which in turn calls get_hwenv().
2203ab5bb018SKeith M Wesolowski  */
2204ab5bb018SKeith M Wesolowski void
2205ab5bb018SKeith M Wesolowski determine_platform(void)
2206ab5bb018SKeith M Wesolowski {
2207ab5bb018SKeith M Wesolowski 	struct cpuid_regs cp;
2208ab5bb018SKeith M Wesolowski 	uint32_t base;
2209ab5bb018SKeith M Wesolowski 	uint32_t regs[4];
2210ab5bb018SKeith M Wesolowski 	char *hvstr = (char *)regs;
2211ab5bb018SKeith M Wesolowski 
2212ab5bb018SKeith M Wesolowski 	ASSERT(platform_type == -1);
2213ab5bb018SKeith M Wesolowski 
2214ab5bb018SKeith M Wesolowski 	platform_type = HW_NATIVE;
2215ab5bb018SKeith M Wesolowski 
2216ab5bb018SKeith M Wesolowski 	if (!enable_platform_detection)
2217ab5bb018SKeith M Wesolowski 		return;
2218ab5bb018SKeith M Wesolowski 
2219ab5bb018SKeith M Wesolowski 	/*
2220ab5bb018SKeith M Wesolowski 	 * If Hypervisor CPUID bit is set, try to determine hypervisor
2221ab5bb018SKeith M Wesolowski 	 * vendor signature, and set platform type accordingly.
2222ab5bb018SKeith M Wesolowski 	 *
2223ab5bb018SKeith M Wesolowski 	 * References:
2224ab5bb018SKeith M Wesolowski 	 * http://lkml.org/lkml/2008/10/1/246
2225ab5bb018SKeith M Wesolowski 	 * http://kb.vmware.com/kb/1009458
2226ab5bb018SKeith M Wesolowski 	 */
2227ab5bb018SKeith M Wesolowski 	cp.cp_eax = 0x1;
2228ab5bb018SKeith M Wesolowski 	(void) __cpuid_insn(&cp);
2229ab5bb018SKeith M Wesolowski 	if ((cp.cp_ecx & CPUID_INTC_ECX_HV) != 0) {
2230ab5bb018SKeith M Wesolowski 		cp.cp_eax = 0x40000000;
2231ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(&cp);
2232ab5bb018SKeith M Wesolowski 		regs[0] = cp.cp_ebx;
2233ab5bb018SKeith M Wesolowski 		regs[1] = cp.cp_ecx;
2234ab5bb018SKeith M Wesolowski 		regs[2] = cp.cp_edx;
2235ab5bb018SKeith M Wesolowski 		regs[3] = 0;
2236ab5bb018SKeith M Wesolowski 		if (strcmp(hvstr, HVSIG_XEN_HVM) == 0) {
2237ab5bb018SKeith M Wesolowski 			platform_type = HW_XEN_HVM;
2238ab5bb018SKeith M Wesolowski 			return;
2239ab5bb018SKeith M Wesolowski 		}
2240ab5bb018SKeith M Wesolowski 		if (strcmp(hvstr, HVSIG_VMWARE) == 0) {
2241ab5bb018SKeith M Wesolowski 			platform_type = HW_VMWARE;
2242ab5bb018SKeith M Wesolowski 			return;
2243ab5bb018SKeith M Wesolowski 		}
2244ab5bb018SKeith M Wesolowski 		if (strcmp(hvstr, HVSIG_KVM) == 0) {
2245ab5bb018SKeith M Wesolowski 			platform_type = HW_KVM;
2246ab5bb018SKeith M Wesolowski 			return;
2247ab5bb018SKeith M Wesolowski 		}
2248ab5bb018SKeith M Wesolowski 		if (strcmp(hvstr, HVSIG_BHYVE) == 0) {
2249ab5bb018SKeith M Wesolowski 			platform_type = HW_BHYVE;
2250ab5bb018SKeith M Wesolowski 			return;
2251ab5bb018SKeith M Wesolowski 		}
22522faf06a0SToomas Soome 		if (strcmp(hvstr, HVSIG_MICROSOFT) == 0) {
2253ab5bb018SKeith M Wesolowski 			platform_type = HW_MICROSOFT;
22542faf06a0SToomas Soome 			return;
22552faf06a0SToomas Soome 		}
22562faf06a0SToomas Soome 		if (strcmp(hvstr, HVSIG_QEMU_TCG) == 0) {
22572faf06a0SToomas Soome 			platform_type = HW_QEMU_TCG;
22582faf06a0SToomas Soome 			return;
22592faf06a0SToomas Soome 		}
2260068cf9dcSPatrick Mooney 		if (strcmp(hvstr, HVSIG_VIRTUALBOX) == 0) {
2261068cf9dcSPatrick Mooney 			platform_type = HW_VIRTUALBOX;
2262068cf9dcSPatrick Mooney 			return;
2263068cf9dcSPatrick Mooney 		}
2264068cf9dcSPatrick Mooney 		if (strcmp(hvstr, HVSIG_ACRN) == 0) {
2265068cf9dcSPatrick Mooney 			platform_type = HW_ACRN;
2266068cf9dcSPatrick Mooney 			return;
2267068cf9dcSPatrick Mooney 		}
2268ab5bb018SKeith M Wesolowski 	} else {
2269ab5bb018SKeith M Wesolowski 		/*
2270ab5bb018SKeith M Wesolowski 		 * Check older VMware hardware versions. The VMware hypervisor
2271ab5bb018SKeith M Wesolowski 		 * is detected by performing an IN operation to the VMware
2272ab5bb018SKeith M Wesolowski 		 * hypervisor port and checking that the value returned in %ebx
2273ab5bb018SKeith M Wesolowski 		 * is the VMware hypervisor magic value.
2274ab5bb018SKeith M Wesolowski 		 *
2275ab5bb018SKeith M Wesolowski 		 * References: http://kb.vmware.com/kb/1009458
2276ab5bb018SKeith M Wesolowski 		 */
2277ab5bb018SKeith M Wesolowski 		vmware_port(VMWARE_HVCMD_GETVERSION, regs);
2278ab5bb018SKeith M Wesolowski 		if (regs[1] == VMWARE_HVMAGIC) {
2279ab5bb018SKeith M Wesolowski 			platform_type = HW_VMWARE;
2280ab5bb018SKeith M Wesolowski 			return;
2281ab5bb018SKeith M Wesolowski 		}
2282ab5bb018SKeith M Wesolowski 	}
2283ab5bb018SKeith M Wesolowski 
2284ab5bb018SKeith M Wesolowski 	/*
2285ab5bb018SKeith M Wesolowski 	 * Check Xen hypervisor. In a fully virtualized domain,
2286ab5bb018SKeith M Wesolowski 	 * Xen's pseudo-cpuid function returns a string representing the
2287ab5bb018SKeith M Wesolowski 	 * Xen signature in %ebx, %ecx, and %edx. %eax contains the maximum
2288ab5bb018SKeith M Wesolowski 	 * supported cpuid function. We need at least a (base + 2) leaf value
2289ab5bb018SKeith M Wesolowski 	 * to do what we want to do. Try different base values, since the
2290ab5bb018SKeith M Wesolowski 	 * hypervisor might use a different one depending on whether Hyper-V
2291ab5bb018SKeith M Wesolowski 	 * emulation is switched on by default or not.
2292ab5bb018SKeith M Wesolowski 	 */
2293ab5bb018SKeith M Wesolowski 	for (base = 0x40000000; base < 0x40010000; base += 0x100) {
2294ab5bb018SKeith M Wesolowski 		cp.cp_eax = base;
2295ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(&cp);
2296ab5bb018SKeith M Wesolowski 		regs[0] = cp.cp_ebx;
2297ab5bb018SKeith M Wesolowski 		regs[1] = cp.cp_ecx;
2298ab5bb018SKeith M Wesolowski 		regs[2] = cp.cp_edx;
2299ab5bb018SKeith M Wesolowski 		regs[3] = 0;
2300ab5bb018SKeith M Wesolowski 		if (strcmp(hvstr, HVSIG_XEN_HVM) == 0 &&
2301ab5bb018SKeith M Wesolowski 		    cp.cp_eax >= (base + 2)) {
2302ab5bb018SKeith M Wesolowski 			platform_type &= ~HW_NATIVE;
2303ab5bb018SKeith M Wesolowski 			platform_type |= HW_XEN_HVM;
2304ab5bb018SKeith M Wesolowski 			return;
2305ab5bb018SKeith M Wesolowski 		}
2306ab5bb018SKeith M Wesolowski 	}
2307ab5bb018SKeith M Wesolowski }
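
/*
 * Illustrative sketch (not part of the original source): how the 12-byte
 * hypervisor signature checked above is laid out in memory. The concrete
 * "KVM" values are assumptions for illustration; only the register ordering
 * and little-endian byte layout are relied upon by the code.
 *
 *	regs[0] = cp.cp_ebx;	e.g. 0x4b4d564b = "KVMK"
 *	regs[1] = cp.cp_ecx;	e.g. 0x564b4d56 = "VMKV"
 *	regs[2] = cp.cp_edx;	e.g. 0x0000004d = "M\0\0\0"
 *	regs[3] = 0;		guarantees NUL termination for strcmp()
 *
 * Reading the three registers back-to-back as bytes therefore yields the
 * string "KVMKVMKVM", which is what the HVSIG_* comparisons match against.
 */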
2308ab5bb018SKeith M Wesolowski 
2309ab5bb018SKeith M Wesolowski int
2310ab5bb018SKeith M Wesolowski get_hwenv(void)
2311ab5bb018SKeith M Wesolowski {
2312ab5bb018SKeith M Wesolowski 	ASSERT(platform_type != -1);
2313ab5bb018SKeith M Wesolowski 	return (platform_type);
2314ab5bb018SKeith M Wesolowski }
2315ab5bb018SKeith M Wesolowski 
2316ab5bb018SKeith M Wesolowski int
2317ab5bb018SKeith M Wesolowski is_controldom(void)
2318ab5bb018SKeith M Wesolowski {
2319ab5bb018SKeith M Wesolowski 	return (0);
2320ab5bb018SKeith M Wesolowski }
2321ab5bb018SKeith M Wesolowski 
2322ab5bb018SKeith M Wesolowski #else
2323ab5bb018SKeith M Wesolowski 
2324ab5bb018SKeith M Wesolowski int
2325ab5bb018SKeith M Wesolowski get_hwenv(void)
2326ab5bb018SKeith M Wesolowski {
2327ab5bb018SKeith M Wesolowski 	return (HW_XEN_PV);
2328ab5bb018SKeith M Wesolowski }
2329ab5bb018SKeith M Wesolowski 
2330ab5bb018SKeith M Wesolowski int
2331ab5bb018SKeith M Wesolowski is_controldom(void)
2332ab5bb018SKeith M Wesolowski {
2333ab5bb018SKeith M Wesolowski 	return (DOMAIN_IS_INITDOMAIN(xen_info));
2334ab5bb018SKeith M Wesolowski }
2335ab5bb018SKeith M Wesolowski 
2336ab5bb018SKeith M Wesolowski #endif	/* __xpv */
2337ab5bb018SKeith M Wesolowski 
2338ab5bb018SKeith M Wesolowski /*
2339dd23d762SRobert Mustacchi  * Gather the extended topology information. This should be the same for both
2340dd23d762SRobert Mustacchi  * AMD leaf 8X26 and Intel leaf 0x1F (though the data interpretation varies).
2341dd23d762SRobert Mustacchi  */
2342dd23d762SRobert Mustacchi static void
2343dd23d762SRobert Mustacchi cpuid_gather_ext_topo_leaf(struct cpuid_info *cpi, uint32_t leaf)
2344dd23d762SRobert Mustacchi {
2345dd23d762SRobert Mustacchi 	uint_t i;
2346dd23d762SRobert Mustacchi 
2347dd23d762SRobert Mustacchi 	for (i = 0; i < ARRAY_SIZE(cpi->cpi_topo); i++) {
2348dd23d762SRobert Mustacchi 		struct cpuid_regs *regs = &cpi->cpi_topo[i];
2349dd23d762SRobert Mustacchi 
2350dd23d762SRobert Mustacchi 		bzero(regs, sizeof (struct cpuid_regs));
2351dd23d762SRobert Mustacchi 		regs->cp_eax = leaf;
2352dd23d762SRobert Mustacchi 		regs->cp_ecx = i;
2353dd23d762SRobert Mustacchi 
2354dd23d762SRobert Mustacchi 		(void) __cpuid_insn(regs);
2355dd23d762SRobert Mustacchi 		if (CPUID_AMD_8X26_ECX_TYPE(regs->cp_ecx) ==
2356dd23d762SRobert Mustacchi 		    CPUID_AMD_8X26_TYPE_DONE) {
2357dd23d762SRobert Mustacchi 			break;
2358dd23d762SRobert Mustacchi 		}
2359dd23d762SRobert Mustacchi 	}
2360dd23d762SRobert Mustacchi 
2361dd23d762SRobert Mustacchi 	cpi->cpi_topo_nleaves = i;
2362dd23d762SRobert Mustacchi }
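
/*
 * Worked example (values assumed for illustration): if subleaves 0-2 report
 * valid topology levels (e.g. core, complex, and die) and subleaf 3 comes
 * back with CPUID_AMD_8X26_TYPE_DONE, the loop above breaks with i == 3, so
 * cpi_topo_nleaves records three valid entries in cpi_topo[].
 */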
2363dd23d762SRobert Mustacchi 
2364dd23d762SRobert Mustacchi /*
2365ab5bb018SKeith M Wesolowski  * Make sure that we have gathered all of the CPUID leaves that we might need to
2366ab5bb018SKeith M Wesolowski  * determine topology. We assume that the standard leaf 1 has already been done
2367ab5bb018SKeith M Wesolowski  * and that xmaxeax has already been calculated.
2368ab5bb018SKeith M Wesolowski  */
2369ab5bb018SKeith M Wesolowski static void
2370ab5bb018SKeith M Wesolowski cpuid_gather_amd_topology_leaves(cpu_t *cpu)
2371ab5bb018SKeith M Wesolowski {
2372ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
2373ab5bb018SKeith M Wesolowski 
2374ab5bb018SKeith M Wesolowski 	if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) {
2375ab5bb018SKeith M Wesolowski 		struct cpuid_regs *cp;
2376ab5bb018SKeith M Wesolowski 
2377ab5bb018SKeith M Wesolowski 		cp = &cpi->cpi_extd[8];
2378ab5bb018SKeith M Wesolowski 		cp->cp_eax = CPUID_LEAF_EXT_8;
2379ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(cp);
2380ab5bb018SKeith M Wesolowski 		platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_8, cp);
2381ab5bb018SKeith M Wesolowski 	}
2382ab5bb018SKeith M Wesolowski 
2383ab5bb018SKeith M Wesolowski 	if (is_x86_feature(x86_featureset, X86FSET_TOPOEXT) &&
2384ab5bb018SKeith M Wesolowski 	    cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) {
2385ab5bb018SKeith M Wesolowski 		struct cpuid_regs *cp;
2386ab5bb018SKeith M Wesolowski 
2387ab5bb018SKeith M Wesolowski 		cp = &cpi->cpi_extd[0x1e];
2388ab5bb018SKeith M Wesolowski 		cp->cp_eax = CPUID_LEAF_EXT_1e;
2389ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(cp);
2390ab5bb018SKeith M Wesolowski 	}
2391dd23d762SRobert Mustacchi 
2392dd23d762SRobert Mustacchi 	if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_26) {
2393dd23d762SRobert Mustacchi 		cpuid_gather_ext_topo_leaf(cpi, CPUID_LEAF_EXT_26);
2394dd23d762SRobert Mustacchi 	}
2395ab5bb018SKeith M Wesolowski }
2396ab5bb018SKeith M Wesolowski 
2397ab5bb018SKeith M Wesolowski /*
2398ab5bb018SKeith M Wesolowski  * Get the APIC ID for this processor. If Leaf B is present and valid, we prefer
2399ab5bb018SKeith M Wesolowski  * it to everything else. If not, and we're on an AMD system where 8000001e is
2400ab5bb018SKeith M Wesolowski  * valid, then we use that. Otherwise, we fall back to the default value for the
2401ab5bb018SKeith M Wesolowski  * APIC ID in leaf 1.
2402ab5bb018SKeith M Wesolowski  */
2403ab5bb018SKeith M Wesolowski static uint32_t
2404ab5bb018SKeith M Wesolowski cpuid_gather_apicid(struct cpuid_info *cpi)
2405ab5bb018SKeith M Wesolowski {
2406ab5bb018SKeith M Wesolowski 	/*
2407dd23d762SRobert Mustacchi 	 * Leaf B changes based on the arguments to it. Because we don't cache
2408ab5bb018SKeith M Wesolowski 	 * it, we need to gather it again.
2409ab5bb018SKeith M Wesolowski 	 */
2410ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax >= 0xB) {
2411ab5bb018SKeith M Wesolowski 		struct cpuid_regs regs;
2412ab5bb018SKeith M Wesolowski 		struct cpuid_regs *cp;
2413ab5bb018SKeith M Wesolowski 
2414ab5bb018SKeith M Wesolowski 		cp = &regs;
2415ab5bb018SKeith M Wesolowski 		cp->cp_eax = 0xB;
2416ab5bb018SKeith M Wesolowski 		cp->cp_edx = cp->cp_ebx = cp->cp_ecx = 0;
2417ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(cp);
2418ab5bb018SKeith M Wesolowski 
2419ab5bb018SKeith M Wesolowski 		if (cp->cp_ebx != 0) {
2420ab5bb018SKeith M Wesolowski 			return (cp->cp_edx);
2421ab5bb018SKeith M Wesolowski 		}
2422ab5bb018SKeith M Wesolowski 	}
2423ab5bb018SKeith M Wesolowski 
2424ab5bb018SKeith M Wesolowski 	if ((cpi->cpi_vendor == X86_VENDOR_AMD ||
2425ab5bb018SKeith M Wesolowski 	    cpi->cpi_vendor == X86_VENDOR_HYGON) &&
2426ab5bb018SKeith M Wesolowski 	    is_x86_feature(x86_featureset, X86FSET_TOPOEXT) &&
2427ab5bb018SKeith M Wesolowski 	    cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) {
2428ab5bb018SKeith M Wesolowski 		return (cpi->cpi_extd[0x1e].cp_eax);
2429ab5bb018SKeith M Wesolowski 	}
2430ab5bb018SKeith M Wesolowski 
2431ab5bb018SKeith M Wesolowski 	return (CPI_APIC_ID(cpi));
2432ab5bb018SKeith M Wesolowski }
2433ab5bb018SKeith M Wesolowski 
2434ab5bb018SKeith M Wesolowski /*
2435ab5bb018SKeith M Wesolowski  * For AMD processors, attempt to calculate the number of chips and cores that
2436ab5bb018SKeith M Wesolowski  * exist. The way that we do this varies based on the generation, because the
2437ab5bb018SKeith M Wesolowski  * generations themselves have changed dramatically.
2438ab5bb018SKeith M Wesolowski  *
2439ab5bb018SKeith M Wesolowski  * If cpuid leaf 0x80000008 exists, that generally tells us the number of cores.
2440ab5bb018SKeith M Wesolowski  * However, with the advent of family 17h (Zen) it actually tells us the number
2441ab5bb018SKeith M Wesolowski  * of threads, so we need to look at leaf 0x8000001e if available to determine
2442ab5bb018SKeith M Wesolowski  * its value. Otherwise, for all prior families, the number of enabled cores is
2443ab5bb018SKeith M Wesolowski  * the same as threads.
2444ab5bb018SKeith M Wesolowski  *
2445ab5bb018SKeith M Wesolowski  * If we do not have leaf 0x80000008, then we assume that this processor does
2446ab5bb018SKeith M Wesolowski  * not have anything. AMD's older CPUID specification says there's no reason to
2447ab5bb018SKeith M Wesolowski  * fall back to leaf 1.
2448ab5bb018SKeith M Wesolowski  *
2449ab5bb018SKeith M Wesolowski  * In some virtualization cases we will not have leaf 8000001e or it will be
2450ab5bb018SKeith M Wesolowski  * zero. When that happens we assume the number of threads is one.
2451ab5bb018SKeith M Wesolowski  */
2452ab5bb018SKeith M Wesolowski static void
2453ab5bb018SKeith M Wesolowski cpuid_amd_ncores(struct cpuid_info *cpi, uint_t *ncpus, uint_t *ncores)
2454ab5bb018SKeith M Wesolowski {
2455ab5bb018SKeith M Wesolowski 	uint_t nthreads, nthread_per_core;
2456ab5bb018SKeith M Wesolowski 
2457ab5bb018SKeith M Wesolowski 	nthreads = nthread_per_core = 1;
2458ab5bb018SKeith M Wesolowski 
2459ab5bb018SKeith M Wesolowski 	if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) {
2460ab5bb018SKeith M Wesolowski 		nthreads = BITX(cpi->cpi_extd[8].cp_ecx, 7, 0) + 1;
2461ab5bb018SKeith M Wesolowski 	} else if ((cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_HTT) != 0) {
2462ab5bb018SKeith M Wesolowski 		nthreads = CPI_CPU_COUNT(cpi);
2463ab5bb018SKeith M Wesolowski 	}
2464ab5bb018SKeith M Wesolowski 
2465ab5bb018SKeith M Wesolowski 	/*
2466ab5bb018SKeith M Wesolowski 	 * For us to have threads, and know about it, we have to be at least at
2467ab5bb018SKeith M Wesolowski 	 * family 17h and have the cpuid bit that says we have extended
2468ab5bb018SKeith M Wesolowski 	 * topology.
2469ab5bb018SKeith M Wesolowski 	 */
2470ab5bb018SKeith M Wesolowski 	if (cpi->cpi_family >= 0x17 &&
2471ab5bb018SKeith M Wesolowski 	    is_x86_feature(x86_featureset, X86FSET_TOPOEXT) &&
2472ab5bb018SKeith M Wesolowski 	    cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) {
2473ab5bb018SKeith M Wesolowski 		nthread_per_core = BITX(cpi->cpi_extd[0x1e].cp_ebx, 15, 8) + 1;
2474ab5bb018SKeith M Wesolowski 	}
2475ab5bb018SKeith M Wesolowski 
2476ab5bb018SKeith M Wesolowski 	*ncpus = nthreads;
2477ab5bb018SKeith M Wesolowski 	*ncores = nthreads / nthread_per_core;
2478ab5bb018SKeith M Wesolowski }
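
/*
 * Worked example (assumed values): on a hypothetical 8-core/16-thread family
 * 17h part with TOPOEXT, leaf 0x80000008 %ecx[7:0] reads 15, giving
 * nthreads = 15 + 1 = 16; leaf 0x8000001e %ebx[15:8] reads 1, giving
 * nthread_per_core = 1 + 1 = 2. Thus *ncpus = 16 and *ncores = 16 / 2 = 8.
 */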
2479ab5bb018SKeith M Wesolowski 
2480ab5bb018SKeith M Wesolowski /*
2481ab5bb018SKeith M Wesolowski  * Seed the initial values for the cores and threads for an Intel based
2482ab5bb018SKeith M Wesolowski  * processor. These values will be overwritten if we detect that the processor
2483ab5bb018SKeith M Wesolowski  * supports CPUID leaf 0xb.
2484ab5bb018SKeith M Wesolowski  */
2485ab5bb018SKeith M Wesolowski static void
2486ab5bb018SKeith M Wesolowski cpuid_intel_ncores(struct cpuid_info *cpi, uint_t *ncpus, uint_t *ncores)
2487ab5bb018SKeith M Wesolowski {
2488ab5bb018SKeith M Wesolowski 	/*
2489ab5bb018SKeith M Wesolowski 	 * Only seed the number of physical cores from the first level leaf 4
2490ab5bb018SKeith M Wesolowski 	 * information. The number of threads there indicates how many share the
2491ab5bb018SKeith M Wesolowski 	 * L1 cache, which may or may not have anything to do with the number of
2492ab5bb018SKeith M Wesolowski 	 * logical CPUs per core.
2493ab5bb018SKeith M Wesolowski 	 */
2494ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax >= 4) {
2495ab5bb018SKeith M Wesolowski 		*ncores = BITX(cpi->cpi_std[4].cp_eax, 31, 26) + 1;
2496ab5bb018SKeith M Wesolowski 	} else {
2497ab5bb018SKeith M Wesolowski 		*ncores = 1;
2498ab5bb018SKeith M Wesolowski 	}
2499ab5bb018SKeith M Wesolowski 
2500ab5bb018SKeith M Wesolowski 	if ((cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_HTT) != 0) {
2501ab5bb018SKeith M Wesolowski 		*ncpus = CPI_CPU_COUNT(cpi);
2502ab5bb018SKeith M Wesolowski 	} else {
2503ab5bb018SKeith M Wesolowski 		*ncpus = *ncores;
2504ab5bb018SKeith M Wesolowski 	}
2505ab5bb018SKeith M Wesolowski }
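
/*
 * Worked example (assumed values): with leaf 4 %eax[31:26] = 1, the seed is
 * *ncores = 1 + 1 = 2; if HTT is set and CPI_CPU_COUNT() reports 4, then
 * *ncpus = 4, i.e. two logical CPUs per core until leaf 0xb refines this.
 */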
2506ab5bb018SKeith M Wesolowski 
2507ab5bb018SKeith M Wesolowski static boolean_t
2508ab5bb018SKeith M Wesolowski cpuid_leafB_getids(cpu_t *cpu)
2509ab5bb018SKeith M Wesolowski {
2510ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
2511ab5bb018SKeith M Wesolowski 	struct cpuid_regs regs;
2512ab5bb018SKeith M Wesolowski 	struct cpuid_regs *cp;
2513ab5bb018SKeith M Wesolowski 
2514ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax < 0xB)
2515ab5bb018SKeith M Wesolowski 		return (B_FALSE);
2516ab5bb018SKeith M Wesolowski 
2517ab5bb018SKeith M Wesolowski 	cp = &regs;
2518ab5bb018SKeith M Wesolowski 	cp->cp_eax = 0xB;
2519ab5bb018SKeith M Wesolowski 	cp->cp_edx = cp->cp_ebx = cp->cp_ecx = 0;
2520ab5bb018SKeith M Wesolowski 
2521ab5bb018SKeith M Wesolowski 	(void) __cpuid_insn(cp);
2522ab5bb018SKeith M Wesolowski 
2523ab5bb018SKeith M Wesolowski 	/*
2524ab5bb018SKeith M Wesolowski 	 * Check that CPUID.(EAX=0BH, ECX=0H):EBX is non-zero, which
2525ab5bb018SKeith M Wesolowski 	 * indicates that the extended topology enumeration leaf is
2526ab5bb018SKeith M Wesolowski 	 * available.
2527ab5bb018SKeith M Wesolowski 	 */
2528ab5bb018SKeith M Wesolowski 	if (cp->cp_ebx != 0) {
2529ab5bb018SKeith M Wesolowski 		uint32_t x2apic_id = 0;
2530ab5bb018SKeith M Wesolowski 		uint_t coreid_shift = 0;
2531ab5bb018SKeith M Wesolowski 		uint_t ncpu_per_core = 1;
2532ab5bb018SKeith M Wesolowski 		uint_t chipid_shift = 0;
2533ab5bb018SKeith M Wesolowski 		uint_t ncpu_per_chip = 1;
2534ab5bb018SKeith M Wesolowski 		uint_t i;
2535ab5bb018SKeith M Wesolowski 		uint_t level;
2536ab5bb018SKeith M Wesolowski 
2537ab5bb018SKeith M Wesolowski 		for (i = 0; i < CPI_FNB_ECX_MAX; i++) {
2538ab5bb018SKeith M Wesolowski 			cp->cp_eax = 0xB;
2539ab5bb018SKeith M Wesolowski 			cp->cp_ecx = i;
2540ab5bb018SKeith M Wesolowski 
2541ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(cp);
2542ab5bb018SKeith M Wesolowski 			level = CPI_CPU_LEVEL_TYPE(cp);
2543ab5bb018SKeith M Wesolowski 
2544ab5bb018SKeith M Wesolowski 			if (level == 1) {
2545ab5bb018SKeith M Wesolowski 				x2apic_id = cp->cp_edx;
2546ab5bb018SKeith M Wesolowski 				coreid_shift = BITX(cp->cp_eax, 4, 0);
2547ab5bb018SKeith M Wesolowski 				ncpu_per_core = BITX(cp->cp_ebx, 15, 0);
2548ab5bb018SKeith M Wesolowski 			} else if (level == 2) {
2549ab5bb018SKeith M Wesolowski 				x2apic_id = cp->cp_edx;
2550ab5bb018SKeith M Wesolowski 				chipid_shift = BITX(cp->cp_eax, 4, 0);
2551ab5bb018SKeith M Wesolowski 				ncpu_per_chip = BITX(cp->cp_ebx, 15, 0);
2552ab5bb018SKeith M Wesolowski 			}
2553ab5bb018SKeith M Wesolowski 		}
2554ab5bb018SKeith M Wesolowski 
2555ab5bb018SKeith M Wesolowski 		/*
2556ab5bb018SKeith M Wesolowski 		 * cpi_apicid is taken care of in cpuid_gather_apicid.
2557ab5bb018SKeith M Wesolowski 		 */
2558ab5bb018SKeith M Wesolowski 		cpi->cpi_ncpu_per_chip = ncpu_per_chip;
2559ab5bb018SKeith M Wesolowski 		cpi->cpi_ncore_per_chip = ncpu_per_chip /
2560ab5bb018SKeith M Wesolowski 		    ncpu_per_core;
2561ab5bb018SKeith M Wesolowski 		cpi->cpi_chipid = x2apic_id >> chipid_shift;
2562ab5bb018SKeith M Wesolowski 		cpi->cpi_clogid = x2apic_id & ((1 << chipid_shift) - 1);
2563ab5bb018SKeith M Wesolowski 		cpi->cpi_coreid = x2apic_id >> coreid_shift;
2564ab5bb018SKeith M Wesolowski 		cpi->cpi_pkgcoreid = cpi->cpi_clogid >> coreid_shift;
2565ab5bb018SKeith M Wesolowski 		cpi->cpi_procnodeid = cpi->cpi_chipid;
2566ab5bb018SKeith M Wesolowski 		cpi->cpi_compunitid = cpi->cpi_coreid;
2567ab5bb018SKeith M Wesolowski 
2568ab5bb018SKeith M Wesolowski 		if (coreid_shift > 0 && chipid_shift > coreid_shift) {
2569ab5bb018SKeith M Wesolowski 			cpi->cpi_nthread_bits = coreid_shift;
2570ab5bb018SKeith M Wesolowski 			cpi->cpi_ncore_bits = chipid_shift - coreid_shift;
2571ab5bb018SKeith M Wesolowski 		}
2572ab5bb018SKeith M Wesolowski 
2573ab5bb018SKeith M Wesolowski 		return (B_TRUE);
2574ab5bb018SKeith M Wesolowski 	} else {
2575ab5bb018SKeith M Wesolowski 		return (B_FALSE);
2576ab5bb018SKeith M Wesolowski 	}
2577ab5bb018SKeith M Wesolowski }
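
/*
 * Worked example (assumed values): suppose level 1 reports coreid_shift = 1
 * (two threads per core) and level 2 reports chipid_shift = 4 (sixteen
 * logical CPUs per package). An x2APIC ID of 0x13 then decomposes as:
 *
 *	chipid    = 0x13 >> 4   = 1
 *	clogid    = 0x13 & 0xf  = 3
 *	coreid    = 0x13 >> 1   = 9
 *	pkgcoreid = clogid >> 1 = 1
 */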
2578ab5bb018SKeith M Wesolowski 
2579ab5bb018SKeith M Wesolowski static void
2580ab5bb018SKeith M Wesolowski cpuid_intel_getids(cpu_t *cpu, void *feature)
2581ab5bb018SKeith M Wesolowski {
2582ab5bb018SKeith M Wesolowski 	uint_t i;
2583ab5bb018SKeith M Wesolowski 	uint_t chipid_shift = 0;
2584ab5bb018SKeith M Wesolowski 	uint_t coreid_shift = 0;
2585ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
2586ab5bb018SKeith M Wesolowski 
2587ab5bb018SKeith M Wesolowski 	/*
2588ab5bb018SKeith M Wesolowski 	 * There are no compute units or processor nodes currently on Intel.
2589ab5bb018SKeith M Wesolowski 	 * Always set these to one.
2590ab5bb018SKeith M Wesolowski 	 */
2591ab5bb018SKeith M Wesolowski 	cpi->cpi_procnodes_per_pkg = 1;
2592ab5bb018SKeith M Wesolowski 	cpi->cpi_cores_per_compunit = 1;
2593ab5bb018SKeith M Wesolowski 
2594ab5bb018SKeith M Wesolowski 	/*
2595ab5bb018SKeith M Wesolowski 	 * If cpuid Leaf B is present, use that to try and get this information.
2596ab5bb018SKeith M Wesolowski 	 * It will be the most accurate for Intel CPUs.
2597ab5bb018SKeith M Wesolowski 	 */
2598ab5bb018SKeith M Wesolowski 	if (cpuid_leafB_getids(cpu))
2599ab5bb018SKeith M Wesolowski 		return;
2600ab5bb018SKeith M Wesolowski 
2601ab5bb018SKeith M Wesolowski 	/*
2602ab5bb018SKeith M Wesolowski 	 * In this case, we have the leaf 1 and leaf 4 values for ncpu_per_chip
2603ab5bb018SKeith M Wesolowski 	 * and ncore_per_chip. These represent the largest power of two values
2604ab5bb018SKeith M Wesolowski 	 * that we need to cover all of the IDs in the system. Therefore, we use
2605ab5bb018SKeith M Wesolowski 	 * those values to seed the number of bits needed to cover information
2606ab5bb018SKeith M Wesolowski 	 * in the case when leaf B is not available. These values will probably
2607ab5bb018SKeith M Wesolowski 	 * be larger than required, but that's OK.
2608ab5bb018SKeith M Wesolowski 	 */
2609ab5bb018SKeith M Wesolowski 	cpi->cpi_nthread_bits = ddi_fls(cpi->cpi_ncpu_per_chip);
2610ab5bb018SKeith M Wesolowski 	cpi->cpi_ncore_bits = ddi_fls(cpi->cpi_ncore_per_chip);
2611ab5bb018SKeith M Wesolowski 
2612ab5bb018SKeith M Wesolowski 	for (i = 1; i < cpi->cpi_ncpu_per_chip; i <<= 1)
2613ab5bb018SKeith M Wesolowski 		chipid_shift++;
2614ab5bb018SKeith M Wesolowski 
2615ab5bb018SKeith M Wesolowski 	cpi->cpi_chipid = cpi->cpi_apicid >> chipid_shift;
2616ab5bb018SKeith M Wesolowski 	cpi->cpi_clogid = cpi->cpi_apicid & ((1 << chipid_shift) - 1);
2617ab5bb018SKeith M Wesolowski 
2618ab5bb018SKeith M Wesolowski 	if (is_x86_feature(feature, X86FSET_CMP)) {
2619ab5bb018SKeith M Wesolowski 		/*
2620ab5bb018SKeith M Wesolowski 		 * Multi-core (and possibly multi-threaded)
2621ab5bb018SKeith M Wesolowski 		 * processors.
2622ab5bb018SKeith M Wesolowski 		 */
2623ab5bb018SKeith M Wesolowski 		uint_t ncpu_per_core = 0;
2624ab5bb018SKeith M Wesolowski 
2625ab5bb018SKeith M Wesolowski 		if (cpi->cpi_ncore_per_chip == 1)
2626ab5bb018SKeith M Wesolowski 			ncpu_per_core = cpi->cpi_ncpu_per_chip;
2627ab5bb018SKeith M Wesolowski 		else if (cpi->cpi_ncore_per_chip > 1)
2628ab5bb018SKeith M Wesolowski 			ncpu_per_core = cpi->cpi_ncpu_per_chip /
2629ab5bb018SKeith M Wesolowski 			    cpi->cpi_ncore_per_chip;
2630ab5bb018SKeith M Wesolowski 		/*
2631ab5bb018SKeith M Wesolowski 		 * 8bit APIC IDs on dual core Pentiums
2632ab5bb018SKeith M Wesolowski 		 * look like this:
2633ab5bb018SKeith M Wesolowski 		 *
2634ab5bb018SKeith M Wesolowski 		 * +-----------------------+------+------+
2635ab5bb018SKeith M Wesolowski 		 * | Physical Package ID   |  MC  |  HT  |
2636ab5bb018SKeith M Wesolowski 		 * +-----------------------+------+------+
2637ab5bb018SKeith M Wesolowski 		 * <------- chipid -------->
2638ab5bb018SKeith M Wesolowski 		 * <------- coreid --------------->
2639ab5bb018SKeith M Wesolowski 		 *			   <--- clogid -->
2640ab5bb018SKeith M Wesolowski 		 *			   <------>
2641ab5bb018SKeith M Wesolowski 		 *			   pkgcoreid
2642ab5bb018SKeith M Wesolowski 		 *
2643ab5bb018SKeith M Wesolowski 		 * Where the number of bits necessary to
2644ab5bb018SKeith M Wesolowski 		 * represent MC and HT fields together equals
2645ab5bb018SKeith M Wesolowski 		 * to the minimum number of bits necessary to
2646ab5bb018SKeith M Wesolowski 		 * store the value of cpi->cpi_ncpu_per_chip.
2647ab5bb018SKeith M Wesolowski 		 * Of those bits, the MC part uses the number
2648ab5bb018SKeith M Wesolowski 		 * of bits necessary to store the value of
2649ab5bb018SKeith M Wesolowski 		 * cpi->cpi_ncore_per_chip.
2650ab5bb018SKeith M Wesolowski 		 */
2651ab5bb018SKeith M Wesolowski 		for (i = 1; i < ncpu_per_core; i <<= 1)
2652ab5bb018SKeith M Wesolowski 			coreid_shift++;
2653ab5bb018SKeith M Wesolowski 		cpi->cpi_coreid = cpi->cpi_apicid >> coreid_shift;
2654ab5bb018SKeith M Wesolowski 		cpi->cpi_pkgcoreid = cpi->cpi_clogid >> coreid_shift;
2655ab5bb018SKeith M Wesolowski 	} else if (is_x86_feature(feature, X86FSET_HTT)) {
2656ab5bb018SKeith M Wesolowski 		/*
2657ab5bb018SKeith M Wesolowski 		 * Single-core multi-threaded processors.
2658ab5bb018SKeith M Wesolowski 		 */
2659ab5bb018SKeith M Wesolowski 		cpi->cpi_coreid = cpi->cpi_chipid;
2660ab5bb018SKeith M Wesolowski 		cpi->cpi_pkgcoreid = 0;
2661ab5bb018SKeith M Wesolowski 	} else {
2662ab5bb018SKeith M Wesolowski 		/*
2663ab5bb018SKeith M Wesolowski 		 * Single-core single-thread processors.
2664ab5bb018SKeith M Wesolowski 		 */
2665ab5bb018SKeith M Wesolowski 		cpi->cpi_coreid = cpu->cpu_id;
2666ab5bb018SKeith M Wesolowski 		cpi->cpi_pkgcoreid = 0;
2667ab5bb018SKeith M Wesolowski 	}
2668ab5bb018SKeith M Wesolowski 	cpi->cpi_procnodeid = cpi->cpi_chipid;
2669ab5bb018SKeith M Wesolowski 	cpi->cpi_compunitid = cpi->cpi_coreid;
2670ab5bb018SKeith M Wesolowski }
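
/*
 * Worked example for the non-leaf-B path (assumed values): with
 * ncpu_per_chip = 4 and ncore_per_chip = 2, the loops above yield
 * chipid_shift = 2 and coreid_shift = 1. An APIC ID of 0x5 then gives
 * chipid = 0x5 >> 2 = 1, clogid = 0x5 & 0x3 = 1, coreid = 0x5 >> 1 = 2,
 * and pkgcoreid = clogid >> 1 = 0.
 */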
2671ab5bb018SKeith M Wesolowski 
2672ab5bb018SKeith M Wesolowski /*
2673ab5bb018SKeith M Wesolowski  * Historically, AMD has had CMP chips with only a single thread per core.
2674ab5bb018SKeith M Wesolowski  * However, starting in family 17h (Zen), this has changed and they now have
2675ab5bb018SKeith M Wesolowski  * multiple threads. Our internal core id needs to be a unique value.
2676ab5bb018SKeith M Wesolowski  *
2677ab5bb018SKeith M Wesolowski  * To determine the core id of an AMD system, if we're from a family before 17h,
2678ab5bb018SKeith M Wesolowski  * then we just use the cpu id, as that gives us a good value that will be
2679ab5bb018SKeith M Wesolowski  * unique for each core. If instead, we're on family 17h or later, then we need
2680ab5bb018SKeith M Wesolowski  * to do something more complicated. CPUID leaf 0x8000001e can tell us
2681ab5bb018SKeith M Wesolowski  * how many threads are in the system. Based on that, we'll shift the APIC ID.
2682ab5bb018SKeith M Wesolowski  * We can't use the normal core id in that leaf as it's only unique within the
2683ab5bb018SKeith M Wesolowski  * socket, which is perfect for cpi_pkgcoreid, but not us.
2684ab5bb018SKeith M Wesolowski  */
2685ab5bb018SKeith M Wesolowski static id_t
2686ab5bb018SKeith M Wesolowski cpuid_amd_get_coreid(cpu_t *cpu)
2687ab5bb018SKeith M Wesolowski {
2688ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
2689ab5bb018SKeith M Wesolowski 
2690ab5bb018SKeith M Wesolowski 	if (cpi->cpi_family >= 0x17 &&
2691ab5bb018SKeith M Wesolowski 	    is_x86_feature(x86_featureset, X86FSET_TOPOEXT) &&
2692ab5bb018SKeith M Wesolowski 	    cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) {
2693ab5bb018SKeith M Wesolowski 		uint_t nthreads = BITX(cpi->cpi_extd[0x1e].cp_ebx, 15, 8) + 1;
2694ab5bb018SKeith M Wesolowski 		if (nthreads > 1) {
2695ab5bb018SKeith M Wesolowski 			VERIFY3U(nthreads, ==, 2);
2696ab5bb018SKeith M Wesolowski 			return (cpi->cpi_apicid >> 1);
2697ab5bb018SKeith M Wesolowski 		}
2698ab5bb018SKeith M Wesolowski 	}
2699ab5bb018SKeith M Wesolowski 
2700ab5bb018SKeith M Wesolowski 	return (cpu->cpu_id);
2701ab5bb018SKeith M Wesolowski }
2702ab5bb018SKeith M Wesolowski 
2703ab5bb018SKeith M Wesolowski /*
2704ab5bb018SKeith M Wesolowski  * Deriving IDs on AMD is a more challenging task. This is notable because of the
2705ab5bb018SKeith M Wesolowski  * following two facts:
2706ab5bb018SKeith M Wesolowski  *
2707ab5bb018SKeith M Wesolowski  *  1. Before family 0x17 (Zen), there was no support for SMT and there was
2708ab5bb018SKeith M Wesolowski  *     also no way to get an actual unique core id from the system. As such, we
2709ab5bb018SKeith M Wesolowski  *     synthesize this case by using cpu->cpu_id.  This scheme does not,
2710ab5bb018SKeith M Wesolowski  *     however, guarantee that sibling cores of a chip will have sequential
2711ab5bb018SKeith M Wesolowski  *     coreids starting at a multiple of the number of cores per chip - that is
2712ab5bb018SKeith M Wesolowski  *     usually the case, but if the APIC IDs have been set up in a different
2713ab5bb018SKeith M Wesolowski  *     order then we need to perform a few more gymnastics for the pkgcoreid.
2714ab5bb018SKeith M Wesolowski  *
2715ab5bb018SKeith M Wesolowski  *  2. In families 0x15 and 0x16 (Bulldozer and co.) the cores came in groups
2716ab5bb018SKeith M Wesolowski  *     called compute units. These compute units share the L1I cache, L2 cache,
2717ab5bb018SKeith M Wesolowski  *     and the FPU. To deal with this, a new topology leaf was added in
2718ab5bb018SKeith M Wesolowski  *     0x8000001e. However, parts of this leaf have different meanings
2719ab5bb018SKeith M Wesolowski  *     once we get to family 0x17.
2720ab5bb018SKeith M Wesolowski  */
2721ab5bb018SKeith M Wesolowski 
2722ab5bb018SKeith M Wesolowski static void
2723ab5bb018SKeith M Wesolowski cpuid_amd_getids(cpu_t *cpu, uchar_t *features)
2724ab5bb018SKeith M Wesolowski {
2725ab5bb018SKeith M Wesolowski 	int i, first_half, coreidsz;
2726ab5bb018SKeith M Wesolowski 	uint32_t nb_caps_reg;
2727ab5bb018SKeith M Wesolowski 	uint_t node2_1;
2728ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
2729ab5bb018SKeith M Wesolowski 	struct cpuid_regs *cp;
2730ab5bb018SKeith M Wesolowski 
2731ab5bb018SKeith M Wesolowski 	/*
2732ab5bb018SKeith M Wesolowski 	 * Calculate the core id (this comes from hardware in family 0x17 if it
2733ab5bb018SKeith M Wesolowski 	 * hasn't been stripped by virtualization). We always set the compute
2734ab5bb018SKeith M Wesolowski 	 * unit id to the same value. Also, initialize the default number of
2735ab5bb018SKeith M Wesolowski 	 * cores per compute unit and nodes per package. This will be
2736ab5bb018SKeith M Wesolowski 	 * overwritten when we know information about a particular family.
2737ab5bb018SKeith M Wesolowski 	 */
2738ab5bb018SKeith M Wesolowski 	cpi->cpi_coreid = cpuid_amd_get_coreid(cpu);
2739ab5bb018SKeith M Wesolowski 	cpi->cpi_compunitid = cpi->cpi_coreid;
2740ab5bb018SKeith M Wesolowski 	cpi->cpi_cores_per_compunit = 1;
2741ab5bb018SKeith M Wesolowski 	cpi->cpi_procnodes_per_pkg = 1;
2742ab5bb018SKeith M Wesolowski 
2743ab5bb018SKeith M Wesolowski 	/*
2744ab5bb018SKeith M Wesolowski 	 * To construct the logical ID, we need to determine how many APIC IDs
2745ab5bb018SKeith M Wesolowski 	 * are dedicated to the cores and threads. This is provided for us in
2746ab5bb018SKeith M Wesolowski 	 * 0x80000008. However, if it's not present (say due to virtualization),
2747ab5bb018SKeith M Wesolowski 	 * then we assume it's one. This should be present on all 64-bit AMD
2748ab5bb018SKeith M Wesolowski 	 * processors.  It was added in family 0xf (Hammer).
2749ab5bb018SKeith M Wesolowski 	 */
2750ab5bb018SKeith M Wesolowski 	if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) {
2751ab5bb018SKeith M Wesolowski 		coreidsz = BITX((cpi)->cpi_extd[8].cp_ecx, 15, 12);
2752ab5bb018SKeith M Wesolowski 
2753ab5bb018SKeith M Wesolowski 		/*
2754ab5bb018SKeith M Wesolowski 		 * In AMD parlance, a chip is really a node, while illumos
2755ab5bb018SKeith M Wesolowski 		 * uses chip as equivalent to socket/package.
2756ab5bb018SKeith M Wesolowski 		 */
2757ab5bb018SKeith M Wesolowski 		if (coreidsz == 0) {
2758ab5bb018SKeith M Wesolowski 			/* Use legacy method */
2759ab5bb018SKeith M Wesolowski 			for (i = 1; i < cpi->cpi_ncore_per_chip; i <<= 1)
2760ab5bb018SKeith M Wesolowski 				coreidsz++;
2761ab5bb018SKeith M Wesolowski 			if (coreidsz == 0)
2762ab5bb018SKeith M Wesolowski 				coreidsz = 1;
2763ab5bb018SKeith M Wesolowski 		}
2764ab5bb018SKeith M Wesolowski 	} else {
2765ab5bb018SKeith M Wesolowski 		/* Assume single-core part */
2766ab5bb018SKeith M Wesolowski 		coreidsz = 1;
2767ab5bb018SKeith M Wesolowski 	}
2768ab5bb018SKeith M Wesolowski 	cpi->cpi_clogid = cpi->cpi_apicid & ((1 << coreidsz) - 1);
2769ab5bb018SKeith M Wesolowski 
2770ab5bb018SKeith M Wesolowski 	/*
2771ab5bb018SKeith M Wesolowski 	 * The package core ID varies depending on the family. While it may be
2772ab5bb018SKeith M Wesolowski 	 * tempting to use the CPUID_LEAF_EXT_1e %ebx core id, unfortunately,
2773ab5bb018SKeith M Wesolowski 	 * this value is the core id in the given node. For non-virtualized
2774ab5bb018SKeith M Wesolowski 	 * family 17h, we need to take the logical core id and shift off the
2775ab5bb018SKeith M Wesolowski 	 * the clogid as is. When family 17h is virtualized, the clogid alone
2776ab5bb018SKeith M Wesolowski 	 * is sufficient: if we don't have valid data in the leaf, then we
2777ab5bb018SKeith M Wesolowski 	 * won't think we have SMT, in which case the cpi_clogid can be used
2778ab5bb018SKeith M Wesolowski 	 * directly.
2779ab5bb018SKeith M Wesolowski 	 * sufficient.
2780ab5bb018SKeith M Wesolowski 	 */
2781ab5bb018SKeith M Wesolowski 	if (cpi->cpi_family >= 0x17 &&
2782ab5bb018SKeith M Wesolowski 	    is_x86_feature(x86_featureset, X86FSET_TOPOEXT) &&
2783ab5bb018SKeith M Wesolowski 	    cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e &&
2784ab5bb018SKeith M Wesolowski 	    cpi->cpi_extd[0x1e].cp_ebx != 0) {
2785ab5bb018SKeith M Wesolowski 		uint_t nthreads = BITX(cpi->cpi_extd[0x1e].cp_ebx, 15, 8) + 1;
2786ab5bb018SKeith M Wesolowski 		if (nthreads > 1) {
2787ab5bb018SKeith M Wesolowski 			VERIFY3U(nthreads, ==, 2);
2788ab5bb018SKeith M Wesolowski 			cpi->cpi_pkgcoreid = cpi->cpi_clogid >> 1;
2789ab5bb018SKeith M Wesolowski 		} else {
2790ab5bb018SKeith M Wesolowski 			cpi->cpi_pkgcoreid = cpi->cpi_clogid;
2791ab5bb018SKeith M Wesolowski 		}
2792ab5bb018SKeith M Wesolowski 	} else {
2793ab5bb018SKeith M Wesolowski 		cpi->cpi_pkgcoreid = cpi->cpi_clogid;
2794ab5bb018SKeith M Wesolowski 	}
2795ab5bb018SKeith M Wesolowski 
2796ab5bb018SKeith M Wesolowski 	/*
2797ab5bb018SKeith M Wesolowski 	 * Obtain the node ID and compute unit IDs. If we're on family 0x15
2798ab5bb018SKeith M Wesolowski 	 * (bulldozer) or newer, then we can derive all of this from leaf
2799ab5bb018SKeith M Wesolowski 	 * CPUID_LEAF_EXT_1e. Otherwise, the method varies by family.
2800ab5bb018SKeith M Wesolowski 	 */
2801ab5bb018SKeith M Wesolowski 	if (is_x86_feature(x86_featureset, X86FSET_TOPOEXT) &&
2802ab5bb018SKeith M Wesolowski 	    cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1e) {
2803ab5bb018SKeith M Wesolowski 		cp = &cpi->cpi_extd[0x1e];
2804ab5bb018SKeith M Wesolowski 
2805ab5bb018SKeith M Wesolowski 		cpi->cpi_procnodes_per_pkg = BITX(cp->cp_ecx, 10, 8) + 1;
2806ab5bb018SKeith M Wesolowski 		cpi->cpi_procnodeid = BITX(cp->cp_ecx, 7, 0);
2807ab5bb018SKeith M Wesolowski 
2808ab5bb018SKeith M Wesolowski 		/*
2809ab5bb018SKeith M Wesolowski 		 * For Bulldozer-era CPUs, recalculate the compute unit
2810ab5bb018SKeith M Wesolowski 		 * information.
2811ab5bb018SKeith M Wesolowski 		 */
2812ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family >= 0x15 && cpi->cpi_family < 0x17) {
2813ab5bb018SKeith M Wesolowski 			cpi->cpi_cores_per_compunit =
2814ab5bb018SKeith M Wesolowski 			    BITX(cp->cp_ebx, 15, 8) + 1;
2815ab5bb018SKeith M Wesolowski 			cpi->cpi_compunitid = BITX(cp->cp_ebx, 7, 0) +
2816ab5bb018SKeith M Wesolowski 			    (cpi->cpi_ncore_per_chip /
2817ab5bb018SKeith M Wesolowski 			    cpi->cpi_cores_per_compunit) *
2818ab5bb018SKeith M Wesolowski 			    (cpi->cpi_procnodeid /
2819ab5bb018SKeith M Wesolowski 			    cpi->cpi_procnodes_per_pkg);
2820ab5bb018SKeith M Wesolowski 		}
2821ab5bb018SKeith M Wesolowski 	} else if (cpi->cpi_family == 0xf || cpi->cpi_family >= 0x11) {
2822ab5bb018SKeith M Wesolowski 		cpi->cpi_procnodeid = (cpi->cpi_apicid >> coreidsz) & 7;
2823ab5bb018SKeith M Wesolowski 	} else if (cpi->cpi_family == 0x10) {
2824ab5bb018SKeith M Wesolowski 		/*
2825ab5bb018SKeith M Wesolowski 		 * See if we are a multi-node processor.
2826ab5bb018SKeith M Wesolowski 		 * All processors in the system have the same number of nodes.
2827ab5bb018SKeith M Wesolowski 		 */
2828ab5bb018SKeith M Wesolowski 		nb_caps_reg =  pci_getl_func(0, 24, 3, 0xe8);
2829ab5bb018SKeith M Wesolowski 		if ((cpi->cpi_model < 8) || BITX(nb_caps_reg, 29, 29) == 0) {
2830ab5bb018SKeith M Wesolowski 			/* Single-node */
2831ab5bb018SKeith M Wesolowski 			cpi->cpi_procnodeid = BITX(cpi->cpi_apicid, 5,
2832ab5bb018SKeith M Wesolowski 			    coreidsz);
2833ab5bb018SKeith M Wesolowski 		} else {
2834ab5bb018SKeith M Wesolowski 
2835ab5bb018SKeith M Wesolowski 			/*
2836ab5bb018SKeith M Wesolowski 			 * Multi-node revision D (2 nodes per package
2837ab5bb018SKeith M Wesolowski 			 * are supported)
2838ab5bb018SKeith M Wesolowski 			 */
2839ab5bb018SKeith M Wesolowski 			cpi->cpi_procnodes_per_pkg = 2;
2840ab5bb018SKeith M Wesolowski 
2841ab5bb018SKeith M Wesolowski 			first_half = (cpi->cpi_pkgcoreid <=
2842ab5bb018SKeith M Wesolowski 			    (cpi->cpi_ncore_per_chip/2 - 1));
2843ab5bb018SKeith M Wesolowski 
2844ab5bb018SKeith M Wesolowski 			if (cpi->cpi_apicid == cpi->cpi_pkgcoreid) {
2845ab5bb018SKeith M Wesolowski 				/* We are BSP */
2846ab5bb018SKeith M Wesolowski 				cpi->cpi_procnodeid = (first_half ? 0 : 1);
2847ab5bb018SKeith M Wesolowski 			} else {
2848ab5bb018SKeith M Wesolowski 
2849ab5bb018SKeith M Wesolowski 				/* We are AP */
2850ab5bb018SKeith M Wesolowski 				/* NodeId[2:1] bits to use for reading F3xe8 */
2851ab5bb018SKeith M Wesolowski 				node2_1 = BITX(cpi->cpi_apicid, 5, 4) << 1;
2852ab5bb018SKeith M Wesolowski 
2853ab5bb018SKeith M Wesolowski 				nb_caps_reg =
2854ab5bb018SKeith M Wesolowski 				    pci_getl_func(0, 24 + node2_1, 3, 0xe8);
2855ab5bb018SKeith M Wesolowski 
2856ab5bb018SKeith M Wesolowski 				/*
2857ab5bb018SKeith M Wesolowski 				 * Check IntNodeNum bit (31:30, but bit 31 is
2858ab5bb018SKeith M Wesolowski 				 * always 0 on dual-node processors)
2859ab5bb018SKeith M Wesolowski 				 */
2860ab5bb018SKeith M Wesolowski 				if (BITX(nb_caps_reg, 30, 30) == 0)
2861ab5bb018SKeith M Wesolowski 					cpi->cpi_procnodeid = node2_1 +
2862ab5bb018SKeith M Wesolowski 					    !first_half;
2863ab5bb018SKeith M Wesolowski 				else
2864ab5bb018SKeith M Wesolowski 					cpi->cpi_procnodeid = node2_1 +
2865ab5bb018SKeith M Wesolowski 					    first_half;
2866ab5bb018SKeith M Wesolowski 			}
2867ab5bb018SKeith M Wesolowski 		}
2868ab5bb018SKeith M Wesolowski 	} else {
2869ab5bb018SKeith M Wesolowski 		cpi->cpi_procnodeid = 0;
2870ab5bb018SKeith M Wesolowski 	}
2871ab5bb018SKeith M Wesolowski 
2872ab5bb018SKeith M Wesolowski 	cpi->cpi_chipid =
2873ab5bb018SKeith M Wesolowski 	    cpi->cpi_procnodeid / cpi->cpi_procnodes_per_pkg;
2874ab5bb018SKeith M Wesolowski 
2875ab5bb018SKeith M Wesolowski 	cpi->cpi_ncore_bits = coreidsz;
2876ab5bb018SKeith M Wesolowski 	cpi->cpi_nthread_bits = ddi_fls(cpi->cpi_ncpu_per_chip /
2877ab5bb018SKeith M Wesolowski 	    cpi->cpi_ncore_per_chip);
2878ab5bb018SKeith M Wesolowski }
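
/*
 * Worked example (assumed values): if leaf 0x80000008 %ecx[15:12] gives
 * coreidsz = 4, an APIC ID of 0x12 yields clogid = 0x12 & 0xf = 2; on a
 * family 0xf or 0x11+ part the node ID would then be (0x12 >> 4) & 7 = 1.
 */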
2879ab5bb018SKeith M Wesolowski 
2880ab5bb018SKeith M Wesolowski static void
2881ab5bb018SKeith M Wesolowski spec_uarch_flush_noop(void)
2882ab5bb018SKeith M Wesolowski {
2883ab5bb018SKeith M Wesolowski }
2884ab5bb018SKeith M Wesolowski 
2885ab5bb018SKeith M Wesolowski /*
2886ab5bb018SKeith M Wesolowski  * When microcode is present that mitigates MDS, this wrmsr will also flush the
2887ab5bb018SKeith M Wesolowski  * MDS-related micro-architectural state that would normally be flushed by calling
2888ab5bb018SKeith M Wesolowski  * x86_md_clear().
2889ab5bb018SKeith M Wesolowski  */
2890ab5bb018SKeith M Wesolowski static void
2891ab5bb018SKeith M Wesolowski spec_uarch_flush_msr(void)
2892ab5bb018SKeith M Wesolowski {
2893ab5bb018SKeith M Wesolowski 	wrmsr(MSR_IA32_FLUSH_CMD, IA32_FLUSH_CMD_L1D);
2894ab5bb018SKeith M Wesolowski }
2895ab5bb018SKeith M Wesolowski 
2896ab5bb018SKeith M Wesolowski /*
2897ab5bb018SKeith M Wesolowski  * This function points to a function that will flush certain
2898ab5bb018SKeith M Wesolowski  * micro-architectural state on the processor. This flush is used to mitigate
28995cd084edSDan McDonald  * three different classes of Intel CPU vulnerabilities: L1TF, MDS, and RFDS.
29005cd084edSDan McDonald  * This function can point to one of three functions:
2901ab5bb018SKeith M Wesolowski  *
2902ab5bb018SKeith M Wesolowski  * - A noop, used either because we are vulnerable but do not have microcode
2903ab5bb018SKeith M Wesolowski  *   available to help deal with the issue, or because we aren't vulnerable
2904ab5bb018SKeith M Wesolowski  *   at all.
2905ab5bb018SKeith M Wesolowski  *
2906ab5bb018SKeith M Wesolowski  * - spec_uarch_flush_msr which will issue an L1D flush and if microcode to
2907ab5bb018SKeith M Wesolowski  *   mitigate MDS is present, also perform the equivalent of the MDS flush;
2908ab5bb018SKeith M Wesolowski  *   however, it only flushes the MDS-related micro-architectural state on the
2909ab5bb018SKeith M Wesolowski  *   current hyperthread; it does not do anything for the twin.
2910ab5bb018SKeith M Wesolowski  *
2911ab5bb018SKeith M Wesolowski  * - x86_md_clear which will flush the MDS related state. This is done when we
2912ab5bb018SKeith M Wesolowski  *   have a processor that is vulnerable to MDS, but is not vulnerable to L1TF
29135cd084edSDan McDonald  *   (RDCL_NO is set); or if the CPU is vulnerable to RFDS and indicates VERW
29145cd084edSDan McDonald  *   can clear it (RFDS_CLEAR is set).
2915ab5bb018SKeith M Wesolowski  */
2916ab5bb018SKeith M Wesolowski void (*spec_uarch_flush)(void) = spec_uarch_flush_noop;
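
/*
 * Illustrative note (an assumption about callers, not taken from this file):
 * consumers invoke whichever mitigation is current through the pointer, e.g.
 *
 *	(*spec_uarch_flush)();
 *
 * so retargeting the pointer, followed by membar_producer(), switches the
 * behavior for subsequent callers without further coordination.
 */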
2917ab5bb018SKeith M Wesolowski 
2918ab5bb018SKeith M Wesolowski static void
2919ab5bb018SKeith M Wesolowski cpuid_update_md_clear(cpu_t *cpu, uchar_t *featureset)
2920ab5bb018SKeith M Wesolowski {
2921ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
2922ab5bb018SKeith M Wesolowski 
29235cd084edSDan McDonald 	/* Non-Intel doesn't concern us here. */
29245cd084edSDan McDonald 	if (cpi->cpi_vendor != X86_VENDOR_Intel)
29255cd084edSDan McDonald 		return;
29265cd084edSDan McDonald 
2927ab5bb018SKeith M Wesolowski 	/*
2928ab5bb018SKeith M Wesolowski 	 * While RDCL_NO indicates that one of the MDS vulnerabilities (MSBDS)
2929ab5bb018SKeith M Wesolowski 	 * has been fixed in hardware, it doesn't cover everything related to
2930ab5bb018SKeith M Wesolowski 	 * MDS. Therefore we can only rely on MDS_NO to determine that we don't
2931ab5bb018SKeith M Wesolowski 	 * need to mitigate this.
29325cd084edSDan McDonald 	 *
29335cd084edSDan McDonald 	 * We must ALSO check the RFDS case: when RFDS_NO is not set but
29345cd084edSDan McDonald 	 * RFDS_CLEAR is, the VERW-based flush is still required.
2935ab5bb018SKeith M Wesolowski 	 */
2936ab5bb018SKeith M Wesolowski 
29375cd084edSDan McDonald 	if ((!is_x86_feature(featureset, X86FSET_MDS_NO) &&
29385cd084edSDan McDonald 	    is_x86_feature(featureset, X86FSET_MD_CLEAR)) ||
29395cd084edSDan McDonald 	    (!is_x86_feature(featureset, X86FSET_RFDS_NO) &&
29405cd084edSDan McDonald 	    is_x86_feature(featureset, X86FSET_RFDS_CLEAR))) {
2941ab5bb018SKeith M Wesolowski 		const uint8_t nop = NOP_INSTR;
2942ab5bb018SKeith M Wesolowski 		uint8_t *md = (uint8_t *)x86_md_clear;
2943ab5bb018SKeith M Wesolowski 
2944ab5bb018SKeith M Wesolowski 		*md = nop;
2945ab5bb018SKeith M Wesolowski 	}
2946ab5bb018SKeith M Wesolowski 
2947ab5bb018SKeith M Wesolowski 	membar_producer();
2948ab5bb018SKeith M Wesolowski }
2949ab5bb018SKeith M Wesolowski 
2950ab5bb018SKeith M Wesolowski static void
2951ab5bb018SKeith M Wesolowski cpuid_update_l1d_flush(cpu_t *cpu, uchar_t *featureset)
2952ab5bb018SKeith M Wesolowski {
29535cd084edSDan McDonald 	boolean_t need_l1d, need_mds, need_rfds;
2954ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
2955ab5bb018SKeith M Wesolowski 
2956ab5bb018SKeith M Wesolowski 	/*
29575cd084edSDan McDonald 	 * If we're not on Intel or we've mitigated all of RDCL, MDS, and RFDS
29585cd084edSDan McDonald 	 * in hardware, then there's nothing left for us to do for enabling
29595cd084edSDan McDonald 	 * the flush. We can also go ahead and say that SMT exclusion is
2960ab5bb018SKeith M Wesolowski 	 * unnecessary.
2961ab5bb018SKeith M Wesolowski 	 */
2962ab5bb018SKeith M Wesolowski 	if (cpi->cpi_vendor != X86_VENDOR_Intel ||
2963ab5bb018SKeith M Wesolowski 	    (is_x86_feature(featureset, X86FSET_RDCL_NO) &&
29645cd084edSDan McDonald 	    is_x86_feature(featureset, X86FSET_MDS_NO) &&
29655cd084edSDan McDonald 	    is_x86_feature(featureset, X86FSET_RFDS_NO))) {
2966ab5bb018SKeith M Wesolowski 		extern int smt_exclusion;
2967ab5bb018SKeith M Wesolowski 		smt_exclusion = 0;
2968ab5bb018SKeith M Wesolowski 		spec_uarch_flush = spec_uarch_flush_noop;
2969ab5bb018SKeith M Wesolowski 		membar_producer();
2970ab5bb018SKeith M Wesolowski 		return;
2971ab5bb018SKeith M Wesolowski 	}
2972ab5bb018SKeith M Wesolowski 
2973ab5bb018SKeith M Wesolowski 	/*
2974ab5bb018SKeith M Wesolowski 	 * The locations where we need to perform an L1D flush are required
2975ab5bb018SKeith M Wesolowski 	 * for mitigating both L1TF and MDS. When verw support is present in
2976ab5bb018SKeith M Wesolowski 	 * microcode, then the L1D flush will take care of doing that as well.
2977ab5bb018SKeith M Wesolowski 	 * However, if we have a system where RDCL_NO is present, but we don't
2978ab5bb018SKeith M Wesolowski 	 * have MDS_NO, then we need to do a verw (x86_md_clear) and not a full
2979ab5bb018SKeith M Wesolowski 	 * L1D flush.
2980ab5bb018SKeith M Wesolowski 	 */
2981ab5bb018SKeith M Wesolowski 	if (!is_x86_feature(featureset, X86FSET_RDCL_NO) &&
2982ab5bb018SKeith M Wesolowski 	    is_x86_feature(featureset, X86FSET_FLUSH_CMD) &&
2983ab5bb018SKeith M Wesolowski 	    !is_x86_feature(featureset, X86FSET_L1D_VM_NO)) {
2984ab5bb018SKeith M Wesolowski 		need_l1d = B_TRUE;
2985ab5bb018SKeith M Wesolowski 	} else {
2986ab5bb018SKeith M Wesolowski 		need_l1d = B_FALSE;
2987ab5bb018SKeith M Wesolowski 	}
2988ab5bb018SKeith M Wesolowski 
2989ab5bb018SKeith M Wesolowski 	if (!is_x86_feature(featureset, X86FSET_MDS_NO) &&
2990ab5bb018SKeith M Wesolowski 	    is_x86_feature(featureset, X86FSET_MD_CLEAR)) {
2991ab5bb018SKeith M Wesolowski 		need_mds = B_TRUE;
2992ab5bb018SKeith M Wesolowski 	} else {
2993ab5bb018SKeith M Wesolowski 		need_mds = B_FALSE;
2994ab5bb018SKeith M Wesolowski 	}
2995ab5bb018SKeith M Wesolowski 
29965cd084edSDan McDonald 	if (!is_x86_feature(featureset, X86FSET_RFDS_NO) &&
29975cd084edSDan McDonald 	    is_x86_feature(featureset, X86FSET_RFDS_CLEAR)) {
29985cd084edSDan McDonald 		need_rfds = B_TRUE;
29995cd084edSDan McDonald 	} else {
30005cd084edSDan McDonald 		need_rfds = B_FALSE;
30015cd084edSDan McDonald 	}
30025cd084edSDan McDonald 
3003ab5bb018SKeith M Wesolowski 	if (need_l1d) {
30045cd084edSDan McDonald 		/*
30055cd084edSDan McDonald 		 * As of Feb, 2024, no CPU needs L1D *and* RFDS mitigation
30065cd084edSDan McDonald 		 * together. If the following VERIFY trips, we need to add
30075cd084edSDan McDonald 		 * further fixes here.
30085cd084edSDan McDonald 		 */
30095cd084edSDan McDonald 		VERIFY(!need_rfds);
3010ab5bb018SKeith M Wesolowski 		spec_uarch_flush = spec_uarch_flush_msr;
30115cd084edSDan McDonald 	} else if (need_mds || need_rfds) {
3012ab5bb018SKeith M Wesolowski 		spec_uarch_flush = x86_md_clear;
3013ab5bb018SKeith M Wesolowski 	} else {
3014ab5bb018SKeith M Wesolowski 		/*
3015ab5bb018SKeith M Wesolowski 		 * We have no hardware mitigations available to us.
3016ab5bb018SKeith M Wesolowski 		 */
3017ab5bb018SKeith M Wesolowski 		spec_uarch_flush = spec_uarch_flush_noop;
3018ab5bb018SKeith M Wesolowski 	}
3019ab5bb018SKeith M Wesolowski 	membar_producer();
3020ab5bb018SKeith M Wesolowski }
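
/*
 * Summary of the selection above (restating the logic, no new behavior):
 *
 *	need_l1d                        -> spec_uarch_flush_msr
 *	                                   (VERIFYed to exclude need_rfds)
 *	!need_l1d, need_mds || need_rfds -> x86_md_clear
 *	otherwise                       -> spec_uarch_flush_noop
 */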
3021ab5bb018SKeith M Wesolowski 
3022ab5bb018SKeith M Wesolowski /*
30235a9c36deSDan McDonald  * Branch History Injection (BHI) mitigations.
30245a9c36deSDan McDonald  *
30255a9c36deSDan McDonald  * Intel has provided a software sequence that will scrub the BHB. Like RSB
30265a9c36deSDan McDonald  * (below) we can scribble a return at the beginning to avoid it if the CPU
30275a9c36deSDan McDonald  * is modern enough. We can also scribble a return if the CPU is old enough
30285a9c36deSDan McDonald  * to not have an RSB (pre-eIBRS).
30295a9c36deSDan McDonald  */
30305a9c36deSDan McDonald typedef enum {
30315a9c36deSDan McDonald 	X86_BHI_TOO_OLD_OR_DISABLED,	/* Pre-eIBRS or disabled */
30325a9c36deSDan McDonald 	X86_BHI_NEW_ENOUGH,		/* AMD, or Intel with BHI_NO set */
30335a9c36deSDan McDonald 	X86_BHI_DIS_S,			/* BHI_NO == 0, but BHI_DIS_S avail. */
30345a9c36deSDan McDonald 	/* NOTE: BHI_DIS_S above will still need the software sequence. */
30355a9c36deSDan McDonald 	X86_BHI_SOFTWARE_SEQUENCE,	/* Use software sequence */
30365a9c36deSDan McDonald } x86_native_bhi_mitigation_t;
30375a9c36deSDan McDonald 
30385a9c36deSDan McDonald x86_native_bhi_mitigation_t x86_bhi_mitigation = X86_BHI_SOFTWARE_SEQUENCE;
30395a9c36deSDan McDonald 
30405a9c36deSDan McDonald static void
30415a9c36deSDan McDonald cpuid_enable_bhi_dis_s(void)
30425a9c36deSDan McDonald {
30435a9c36deSDan McDonald 	uint64_t val;
30445a9c36deSDan McDonald 
30455a9c36deSDan McDonald 	val = rdmsr(MSR_IA32_SPEC_CTRL);
30465a9c36deSDan McDonald 	val |= IA32_SPEC_CTRL_BHI_DIS_S;
30475a9c36deSDan McDonald 	wrmsr(MSR_IA32_SPEC_CTRL, val);
30485a9c36deSDan McDonald }
30495a9c36deSDan McDonald 
30505a9c36deSDan McDonald /*
30515a9c36deSDan McDonald  * This function scribbles RET into the first instruction of x86_bhb_clear()
30525a9c36deSDan McDonald  * if SPECTREV2 mitigations are disabled, the CPU is too old, the CPU is new
30535a9c36deSDan McDonald  * enough to have the fix (which includes non-Intel CPUs), or the CPU has an explicit
30545a9c36deSDan McDonald  * disable-Branch-History control.
30555a9c36deSDan McDonald  */
30565a9c36deSDan McDonald static x86_native_bhi_mitigation_t
30575a9c36deSDan McDonald cpuid_learn_and_patch_bhi(x86_spectrev2_mitigation_t v2mit, cpu_t *cpu,
30585a9c36deSDan McDonald     uchar_t *featureset)
30595a9c36deSDan McDonald {
30605a9c36deSDan McDonald 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
30615a9c36deSDan McDonald 	const uint8_t ret = RET_INSTR;
30625a9c36deSDan McDonald 	uint8_t *bhb_clear = (uint8_t *)x86_bhb_clear;
30635a9c36deSDan McDonald 
30645a9c36deSDan McDonald 	ASSERT0(cpu->cpu_id);
30655a9c36deSDan McDonald 
30665a9c36deSDan McDonald 	/* First check for explicitly disabled... */
30675a9c36deSDan McDonald 	if (v2mit == X86_SPECTREV2_DISABLED) {
30685a9c36deSDan McDonald 		*bhb_clear = ret;
30695a9c36deSDan McDonald 		return (X86_BHI_TOO_OLD_OR_DISABLED);
30705a9c36deSDan McDonald 	}
30715a9c36deSDan McDonald 
30725a9c36deSDan McDonald 	/*
30735a9c36deSDan McDonald 	 * Then check for BHI_NO, which means the CPU doesn't have this bug,
30745a9c36deSDan McDonald 	 * or if it's non-Intel, in which case this mitigation mechanism
30755a9c36deSDan McDonald 	 * doesn't apply.
30765a9c36deSDan McDonald 	 */
30775a9c36deSDan McDonald 	if (cpi->cpi_vendor != X86_VENDOR_Intel ||
30785a9c36deSDan McDonald 	    is_x86_feature(featureset, X86FSET_BHI_NO)) {
30795a9c36deSDan McDonald 		*bhb_clear = ret;
30805a9c36deSDan McDonald 		return (X86_BHI_NEW_ENOUGH);
30815a9c36deSDan McDonald 	}
30825a9c36deSDan McDonald 
30835a9c36deSDan McDonald 	/*
30845a9c36deSDan McDonald 	 * Now check for the BHI_CTRL MSR, and then set it if available.
30855a9c36deSDan McDonald 	 * We will still need to use the software sequence, however.
30865a9c36deSDan McDonald 	 */
30875a9c36deSDan McDonald 	if (is_x86_feature(featureset, X86FSET_BHI_CTRL)) {
30885a9c36deSDan McDonald 		cpuid_enable_bhi_dis_s();
30895a9c36deSDan McDonald 		return (X86_BHI_DIS_S);
30905a9c36deSDan McDonald 	}
30915a9c36deSDan McDonald 
30925a9c36deSDan McDonald 	/*
30935a9c36deSDan McDonald 	 * Finally, check if we are too old to bother with RSB:
30945a9c36deSDan McDonald 	 */
30955a9c36deSDan McDonald 	if (v2mit == X86_SPECTREV2_RETPOLINE) {
30965a9c36deSDan McDonald 		*bhb_clear = ret;
30975a9c36deSDan McDonald 		return (X86_BHI_TOO_OLD_OR_DISABLED);
30985a9c36deSDan McDonald 	}
30995a9c36deSDan McDonald 
31005a9c36deSDan McDonald 	ASSERT(*bhb_clear != ret);
31015a9c36deSDan McDonald 	return (X86_BHI_SOFTWARE_SEQUENCE);
31025a9c36deSDan McDonald }
31035a9c36deSDan McDonald 
31045a9c36deSDan McDonald /*
3105a6e309baSDan McDonald  * We default to enabling Return Stack Buffer (RSB) mitigations.
3106ab5bb018SKeith M Wesolowski  *
3107a6e309baSDan McDonald  * We used to skip RSB mitigations with Intel eIBRS, but developments around
3108a6e309baSDan McDonald  * post-barrier RSB (PBRSB) guessing suggests we should enable Intel RSB
3109a6e309baSDan McDonald  * mitigations always unless explicitly bypassed, or unless hardware indicates
3110a6e309baSDan McDonald  * the bug has been fixed.
3111651a12cbSRobert Mustacchi  *
3112a6e309baSDan McDonald  * The current decisions for using, or ignoring, a RSB software stuffing
3113a6e309baSDan McDonald  * sequence are expressed by the following table:
3114a6e309baSDan McDonald  *
3115a6e309baSDan McDonald  * +-------+------------+-----------------+--------+
3116a6e309baSDan McDonald  * | eIBRS |  PBRSB_NO  |  context switch | vmexit |
3117a6e309baSDan McDonald  * +-------+------------+-----------------+--------+
3118a6e309baSDan McDonald  * |   Yes |     No     |  stuff          | stuff  |
3119a6e309baSDan McDonald  * |   Yes |     Yes    |  ignore         | ignore |
3120a6e309baSDan McDonald  * |   No  |     No     |  stuff          | ignore |
3121a6e309baSDan McDonald  * +-------+------------+-----------------+--------+
3122a6e309baSDan McDonald  *
3123a6e309baSDan McDonald  * Note that if an Intel CPU has no eIBRS, it will never enumerate PBRSB_NO,
3124a6e309baSDan McDonald  * because machines with no eIBRS do not have a problem with PBRSB overflow.
3125a6e309baSDan McDonald  * See the Intel document cited below for details.
3126a6e309baSDan McDonald  *
3127a6e309baSDan McDonald  * Also note that AMD AUTO_IBRS has no PBRSB problem, so it is not included in
3128a6e309baSDan McDonald  * the table above, and that there is no situation where vmexit stuffing is
3129a6e309baSDan McDonald  * needed, but context-switch stuffing isn't.
3130a6e309baSDan McDonald  */
3131a6e309baSDan McDonald 
3132a6e309baSDan McDonald /* BEGIN CSTYLED */
3133a6e309baSDan McDonald /*
3134a6e309baSDan McDonald  * https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/post-barrier-return-stack-buffer-predictions.html
3135a6e309baSDan McDonald  */
3136a6e309baSDan McDonald /* END CSTYLED */
3137a6e309baSDan McDonald 
3138a6e309baSDan McDonald /*
3139651a12cbSRobert Mustacchi  * AMD indicates that when Automatic IBRS is enabled we do not need to implement
3140651a12cbSRobert Mustacchi  * return stack buffer clearing for VMEXIT as it takes care of it. The manual
3141651a12cbSRobert Mustacchi  * also states that as long as SMEP is enabled and we maintain at least one
3142651a12cbSRobert Mustacchi  * page between the kernel and user space (we have much more of a red zone),
3143651a12cbSRobert Mustacchi  * then we do not need to clear the RSB. We constrain this to only when
3144651a12cbSRobert Mustacchi  * Automatic IBRS is present.
3145ab5bb018SKeith M Wesolowski  */
3146ab5bb018SKeith M Wesolowski static void
3147a6e309baSDan McDonald cpuid_patch_rsb(x86_spectrev2_mitigation_t mit, bool intel_pbrsb_no)
3148ab5bb018SKeith M Wesolowski {
3149ab5bb018SKeith M Wesolowski 	const uint8_t ret = RET_INSTR;
3150ab5bb018SKeith M Wesolowski 	uint8_t *stuff = (uint8_t *)x86_rsb_stuff;
3151a6e309baSDan McDonald 	uint8_t *vmx_stuff = (uint8_t *)x86_rsb_stuff_vmexit;
3152ab5bb018SKeith M Wesolowski 
3153ab5bb018SKeith M Wesolowski 	switch (mit) {
3154651a12cbSRobert Mustacchi 	case X86_SPECTREV2_AUTO_IBRS:
3155ab5bb018SKeith M Wesolowski 	case X86_SPECTREV2_DISABLED:
3156a6e309baSDan McDonald 		/* Don't bother with any RSB stuffing! */
3157ab5bb018SKeith M Wesolowski 		*stuff = ret;
3158a6e309baSDan McDonald 		*vmx_stuff = ret;
3159a6e309baSDan McDonald 		break;
3160a6e309baSDan McDonald 	case X86_SPECTREV2_RETPOLINE:
3161a6e309baSDan McDonald 		/*
3162a6e309baSDan McDonald 		 * The Intel document on Post-Barrier RSB says that processors
3163a6e309baSDan McDonald 		 * without eIBRS do not have PBRSB problems upon VMEXIT.
3164a6e309baSDan McDonald 		 */
3165a6e309baSDan McDonald 		VERIFY(!intel_pbrsb_no);
3166a6e309baSDan McDonald 		VERIFY3U(*stuff, !=, ret);
3167a6e309baSDan McDonald 		*vmx_stuff = ret;
3168ab5bb018SKeith M Wesolowski 		break;
3169ab5bb018SKeith M Wesolowski 	default:
3170a6e309baSDan McDonald 		/*
3171a6e309baSDan McDonald 		 * eIBRS is all that's left.  If the CPU claims PBRSB is fixed,
3172a6e309baSDan McDonald 		 * don't use the RSB mitigation in either case.  Otherwise
3173a6e309baSDan McDonald 		 * both vmexit and context-switching require the software
3174a6e309baSDan McDonald 		 * mitigation.
3175a6e309baSDan McDonald 		 */
3176a6e309baSDan McDonald 		if (intel_pbrsb_no) {
3177a6e309baSDan McDonald 			/* CPU claims PBRSB problems are fixed. */
3178a6e309baSDan McDonald 			*stuff = ret;
3179a6e309baSDan McDonald 			*vmx_stuff = ret;
3180a6e309baSDan McDonald 		}
3181a6e309baSDan McDonald 		VERIFY3U(*stuff, ==, *vmx_stuff);
3182ab5bb018SKeith M Wesolowski 		break;
3183ab5bb018SKeith M Wesolowski 	}
3184ab5bb018SKeith M Wesolowski }
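/*
 * Editorial sketch (not part of the original source): the "patching" above
 * works by overwriting the first byte of a stuffing routine with a one-byte
 * RET (RET_INSTR), so callers still make the call but return immediately:
 *
 *	before:	x86_rsb_stuff:	stuffing sequence ... ; ret
 *	after:	x86_rsb_stuff:	ret	(RET_INSTR written over the entry)
 *
 * The VERIFY()s above rely on this: a routine is "disabled" exactly when
 * its first byte equals RET_INSTR.
 */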
3185ab5bb018SKeith M Wesolowski 
3186ab5bb018SKeith M Wesolowski static void
3187ab5bb018SKeith M Wesolowski cpuid_patch_retpolines(x86_spectrev2_mitigation_t mit)
3188ab5bb018SKeith M Wesolowski {
3189ab5bb018SKeith M Wesolowski 	const char *thunks[] = { "_rax", "_rbx", "_rcx", "_rdx", "_rdi",
3190ab5bb018SKeith M Wesolowski 	    "_rsi", "_rbp", "_r8", "_r9", "_r10", "_r11", "_r12", "_r13",
3191ab5bb018SKeith M Wesolowski 	    "_r14", "_r15" };
3192ab5bb018SKeith M Wesolowski 	const uint_t nthunks = ARRAY_SIZE(thunks);
3193ab5bb018SKeith M Wesolowski 	const char *type;
3194ab5bb018SKeith M Wesolowski 	uint_t i;
3195ab5bb018SKeith M Wesolowski 
3196ab5bb018SKeith M Wesolowski 	if (mit == x86_spectrev2_mitigation)
3197ab5bb018SKeith M Wesolowski 		return;
3198ab5bb018SKeith M Wesolowski 
3199ab5bb018SKeith M Wesolowski 	switch (mit) {
3200ab5bb018SKeith M Wesolowski 	case X86_SPECTREV2_RETPOLINE:
3201ab5bb018SKeith M Wesolowski 		type = "gen";
3202ab5bb018SKeith M Wesolowski 		break;
3203651a12cbSRobert Mustacchi 	case X86_SPECTREV2_AUTO_IBRS:
3204ab5bb018SKeith M Wesolowski 	case X86_SPECTREV2_ENHANCED_IBRS:
3205ab5bb018SKeith M Wesolowski 	case X86_SPECTREV2_DISABLED:
3206ab5bb018SKeith M Wesolowski 		type = "jmp";
3207ab5bb018SKeith M Wesolowski 		break;
3208ab5bb018SKeith M Wesolowski 	default:
3209651a12cbSRobert Mustacchi 		panic("asked to update retpoline state with unknown state!");
3210ab5bb018SKeith M Wesolowski 	}
3211ab5bb018SKeith M Wesolowski 
3212ab5bb018SKeith M Wesolowski 	for (i = 0; i < nthunks; i++) {
3213ab5bb018SKeith M Wesolowski 		uintptr_t source, dest;
3214ab5bb018SKeith M Wesolowski 		int ssize, dsize;
3215ab5bb018SKeith M Wesolowski 		char sourcebuf[64], destbuf[64];
3216ab5bb018SKeith M Wesolowski 
3217ab5bb018SKeith M Wesolowski 		(void) snprintf(destbuf, sizeof (destbuf),
3218ab5bb018SKeith M Wesolowski 		    "__x86_indirect_thunk%s", thunks[i]);
3219ab5bb018SKeith M Wesolowski 		(void) snprintf(sourcebuf, sizeof (sourcebuf),
3220ab5bb018SKeith M Wesolowski 		    "__x86_indirect_thunk_%s%s", type, thunks[i]);
3221ab5bb018SKeith M Wesolowski 
3222ab5bb018SKeith M Wesolowski 		source = kobj_getelfsym(sourcebuf, NULL, &ssize);
3223ab5bb018SKeith M Wesolowski 		dest = kobj_getelfsym(destbuf, NULL, &dsize);
3224ab5bb018SKeith M Wesolowski 		VERIFY3U(source, !=, 0);
3225ab5bb018SKeith M Wesolowski 		VERIFY3U(dest, !=, 0);
3226ab5bb018SKeith M Wesolowski 		VERIFY3S(dsize, >=, ssize);
3227ab5bb018SKeith M Wesolowski 		bcopy((void *)source, (void *)dest, ssize);
3228ab5bb018SKeith M Wesolowski 	}
3229ab5bb018SKeith M Wesolowski }
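/*
 * Editorial sketch: the "gen" thunks copied above follow the canonical
 * retpoline shape (shown here for %rax; the authoritative versions live in
 * the assembly sources and may differ in detail):
 *
 *	__x86_indirect_thunk_gen_rax:
 *		call	2f
 *	1:	pause
 *		lfence
 *		jmp	1b
 *	2:	movq	%rax, (%rsp)
 *		ret
 *
 * The "jmp" variants are assumed to be a plain `jmp *%reg', used when eIBRS
 * or Automatic IBRS makes the capturing loop unnecessary.
 */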
3230ab5bb018SKeith M Wesolowski 
3231ab5bb018SKeith M Wesolowski static void
3232ab5bb018SKeith M Wesolowski cpuid_enable_enhanced_ibrs(void)
3233ab5bb018SKeith M Wesolowski {
3234ab5bb018SKeith M Wesolowski 	uint64_t val;
3235ab5bb018SKeith M Wesolowski 
3236ab5bb018SKeith M Wesolowski 	val = rdmsr(MSR_IA32_SPEC_CTRL);
3237ab5bb018SKeith M Wesolowski 	val |= IA32_SPEC_CTRL_IBRS;
3238ab5bb018SKeith M Wesolowski 	wrmsr(MSR_IA32_SPEC_CTRL, val);
3239ab5bb018SKeith M Wesolowski }
3240ab5bb018SKeith M Wesolowski 
3241651a12cbSRobert Mustacchi static void
3242651a12cbSRobert Mustacchi cpuid_enable_auto_ibrs(void)
3243651a12cbSRobert Mustacchi {
3244651a12cbSRobert Mustacchi 	uint64_t val;
3245651a12cbSRobert Mustacchi 
3246651a12cbSRobert Mustacchi 	val = rdmsr(MSR_AMD_EFER);
3247651a12cbSRobert Mustacchi 	val |= AMD_EFER_AIBRSE;
3248651a12cbSRobert Mustacchi 	wrmsr(MSR_AMD_EFER, val);
3249651a12cbSRobert Mustacchi }
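/*
 * Editorial note: the two enable paths above differ only in mechanism. Intel
 * eIBRS is switched on through a SPEC_CTRL MSR bit (IA32_SPEC_CTRL_IBRS),
 * while AMD Automatic IBRS is an EFER bit (AMD_EFER_AIBRSE); both are
 * read-modify-write sequences so other bits in the register are preserved.
 */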
3250651a12cbSRobert Mustacchi 
3251ab5bb018SKeith M Wesolowski /*
3252ab5bb018SKeith M Wesolowski  * Determine how we should mitigate TAA or if we need to. Regardless of TAA, if
3253ab5bb018SKeith M Wesolowski  * we can disable TSX, we do so.
3254ab5bb018SKeith M Wesolowski  *
3255ab5bb018SKeith M Wesolowski  * This determination is done only on the boot CPU, potentially after loading
3256ab5bb018SKeith M Wesolowski  * updated microcode.
3257ab5bb018SKeith M Wesolowski  */
3258ab5bb018SKeith M Wesolowski static void
3259ab5bb018SKeith M Wesolowski cpuid_update_tsx(cpu_t *cpu, uchar_t *featureset)
3260ab5bb018SKeith M Wesolowski {
3261ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
3262ab5bb018SKeith M Wesolowski 
3263ab5bb018SKeith M Wesolowski 	VERIFY(cpu->cpu_id == 0);
3264ab5bb018SKeith M Wesolowski 
3265ab5bb018SKeith M Wesolowski 	if (cpi->cpi_vendor != X86_VENDOR_Intel) {
3266ab5bb018SKeith M Wesolowski 		x86_taa_mitigation = X86_TAA_HW_MITIGATED;
3267ab5bb018SKeith M Wesolowski 		return;
3268ab5bb018SKeith M Wesolowski 	}
3269ab5bb018SKeith M Wesolowski 
3270ab5bb018SKeith M Wesolowski 	if (x86_disable_taa) {
3271ab5bb018SKeith M Wesolowski 		x86_taa_mitigation = X86_TAA_DISABLED;
3272ab5bb018SKeith M Wesolowski 		return;
3273ab5bb018SKeith M Wesolowski 	}
3274ab5bb018SKeith M Wesolowski 
3275ab5bb018SKeith M Wesolowski 	/*
3276ab5bb018SKeith M Wesolowski 	 * If we do not have the ability to disable TSX, then our only
3277ab5bb018SKeith M Wesolowski 	 * mitigation options are in hardware (TAA_NO), or by using our existing
3278ab5bb018SKeith M Wesolowski 	 * MDS mitigation as described above.  The latter relies upon us having
3279ab5bb018SKeith M Wesolowski 	 * configured MDS mitigations correctly! This includes disabling SMT if
3280ab5bb018SKeith M Wesolowski 	 * we want cross-CPU-thread protection.
3281ab5bb018SKeith M Wesolowski 	 */
3282ab5bb018SKeith M Wesolowski 	if (!is_x86_feature(featureset, X86FSET_TSX_CTRL)) {
3283ab5bb018SKeith M Wesolowski 		/*
3284ab5bb018SKeith M Wesolowski 		 * It's not clear whether any parts will enumerate TAA_NO
3285ab5bb018SKeith M Wesolowski 		 * *without* TSX_CTRL, but let's mark it as such if we see this.
3286ab5bb018SKeith M Wesolowski 		 */
3287ab5bb018SKeith M Wesolowski 		if (is_x86_feature(featureset, X86FSET_TAA_NO)) {
3288ab5bb018SKeith M Wesolowski 			x86_taa_mitigation = X86_TAA_HW_MITIGATED;
3289ab5bb018SKeith M Wesolowski 			return;
3290ab5bb018SKeith M Wesolowski 		}
3291ab5bb018SKeith M Wesolowski 
3292ab5bb018SKeith M Wesolowski 		if (is_x86_feature(featureset, X86FSET_MD_CLEAR) &&
3293ab5bb018SKeith M Wesolowski 		    !is_x86_feature(featureset, X86FSET_MDS_NO)) {
3294ab5bb018SKeith M Wesolowski 			x86_taa_mitigation = X86_TAA_MD_CLEAR;
3295ab5bb018SKeith M Wesolowski 		} else {
3296ab5bb018SKeith M Wesolowski 			x86_taa_mitigation = X86_TAA_NOTHING;
3297ab5bb018SKeith M Wesolowski 		}
3298ab5bb018SKeith M Wesolowski 		return;
3299ab5bb018SKeith M Wesolowski 	}
3300ab5bb018SKeith M Wesolowski 
3301ab5bb018SKeith M Wesolowski 	/*
3302ab5bb018SKeith M Wesolowski 	 * We have TSX_CTRL, but we can only fully disable TSX if we're early
3303ab5bb018SKeith M Wesolowski 	 * enough in boot.
3304ab5bb018SKeith M Wesolowski 	 *
3305ab5bb018SKeith M Wesolowski 	 * Otherwise, we'll fall back to causing transactions to abort as our
3306ab5bb018SKeith M Wesolowski 	 * mitigation. TSX-using code will always take the fallback path.
3307ab5bb018SKeith M Wesolowski 	 */
3308ab5bb018SKeith M Wesolowski 	if (cpi->cpi_pass < 4) {
3309ab5bb018SKeith M Wesolowski 		x86_taa_mitigation = X86_TAA_TSX_DISABLE;
3310ab5bb018SKeith M Wesolowski 	} else {
3311ab5bb018SKeith M Wesolowski 		x86_taa_mitigation = X86_TAA_TSX_FORCE_ABORT;
3312ab5bb018SKeith M Wesolowski 	}
3313ab5bb018SKeith M Wesolowski }
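/*
 * Editorial summary of the TAA decision ladder above (derived from the code,
 * not from the original comments):
 *
 *	non-Intel			-> X86_TAA_HW_MITIGATED
 *	x86_disable_taa set		-> X86_TAA_DISABLED
 *	no TSX_CTRL, TAA_NO		-> X86_TAA_HW_MITIGATED
 *	no TSX_CTRL, MD_CLEAR, !MDS_NO	-> X86_TAA_MD_CLEAR
 *	no TSX_CTRL otherwise		-> X86_TAA_NOTHING
 *	TSX_CTRL, pass < 4		-> X86_TAA_TSX_DISABLE
 *	TSX_CTRL, pass >= 4		-> X86_TAA_TSX_FORCE_ABORT
 */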
3314ab5bb018SKeith M Wesolowski 
3315ab5bb018SKeith M Wesolowski /*
3316ab5bb018SKeith M Wesolowski  * As mentioned, we should only touch the MSR when we've got suitable
3317ab5bb018SKeith M Wesolowski  * microcode loaded on this CPU.
3318ab5bb018SKeith M Wesolowski  */
3319ab5bb018SKeith M Wesolowski static void
3320ab5bb018SKeith M Wesolowski cpuid_apply_tsx(x86_taa_mitigation_t taa, uchar_t *featureset)
3321ab5bb018SKeith M Wesolowski {
3322ab5bb018SKeith M Wesolowski 	uint64_t val;
3323ab5bb018SKeith M Wesolowski 
3324ab5bb018SKeith M Wesolowski 	switch (taa) {
3325ab5bb018SKeith M Wesolowski 	case X86_TAA_TSX_DISABLE:
3326ab5bb018SKeith M Wesolowski 		if (!is_x86_feature(featureset, X86FSET_TSX_CTRL))
3327ab5bb018SKeith M Wesolowski 			return;
3328ab5bb018SKeith M Wesolowski 		val = rdmsr(MSR_IA32_TSX_CTRL);
3329ab5bb018SKeith M Wesolowski 		val |= IA32_TSX_CTRL_CPUID_CLEAR | IA32_TSX_CTRL_RTM_DISABLE;
3330ab5bb018SKeith M Wesolowski 		wrmsr(MSR_IA32_TSX_CTRL, val);
3331ab5bb018SKeith M Wesolowski 		break;
3332ab5bb018SKeith M Wesolowski 	case X86_TAA_TSX_FORCE_ABORT:
3333ab5bb018SKeith M Wesolowski 		if (!is_x86_feature(featureset, X86FSET_TSX_CTRL))
3334ab5bb018SKeith M Wesolowski 			return;
3335ab5bb018SKeith M Wesolowski 		val = rdmsr(MSR_IA32_TSX_CTRL);
3336ab5bb018SKeith M Wesolowski 		val |= IA32_TSX_CTRL_RTM_DISABLE;
3337ab5bb018SKeith M Wesolowski 		wrmsr(MSR_IA32_TSX_CTRL, val);
3338ab5bb018SKeith M Wesolowski 		break;
3339ab5bb018SKeith M Wesolowski 	case X86_TAA_HW_MITIGATED:
3340ab5bb018SKeith M Wesolowski 	case X86_TAA_MD_CLEAR:
3341ab5bb018SKeith M Wesolowski 	case X86_TAA_DISABLED:
3342ab5bb018SKeith M Wesolowski 	case X86_TAA_NOTHING:
3343ab5bb018SKeith M Wesolowski 		break;
3344ab5bb018SKeith M Wesolowski 	}
3345ab5bb018SKeith M Wesolowski }
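/*
 * Editorial note: per Intel's documented IA32_TSX_CTRL semantics,
 * RTM_DISABLE forces every RTM transaction to abort, while CPUID_CLEAR
 * additionally hides the RTM/HLE CPUID bits. That is why the full-disable
 * case above sets both bits, whereas the force-abort case sets only
 * RTM_DISABLE, leaving TSX enumerated so existing users take their abort
 * fallback path.
 */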
3346ab5bb018SKeith M Wesolowski 
3347ab5bb018SKeith M Wesolowski static void
3348ab5bb018SKeith M Wesolowski cpuid_scan_security(cpu_t *cpu, uchar_t *featureset)
3349ab5bb018SKeith M Wesolowski {
3350ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
3351ab5bb018SKeith M Wesolowski 	x86_spectrev2_mitigation_t v2mit;
3352ab5bb018SKeith M Wesolowski 
3353ab5bb018SKeith M Wesolowski 	if ((cpi->cpi_vendor == X86_VENDOR_AMD ||
3354ab5bb018SKeith M Wesolowski 	    cpi->cpi_vendor == X86_VENDOR_HYGON) &&
3355ab5bb018SKeith M Wesolowski 	    cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) {
3356ab5bb018SKeith M Wesolowski 		if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_IBPB)
3357ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_IBPB);
3358ab5bb018SKeith M Wesolowski 		if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_IBRS)
3359ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_IBRS);
3360ab5bb018SKeith M Wesolowski 		if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_STIBP)
3361ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_STIBP);
3362ab5bb018SKeith M Wesolowski 		if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_STIBP_ALL)
3363ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_STIBP_ALL);
3364ab5bb018SKeith M Wesolowski 		if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_SSBD)
3365ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SSBD);
3366ab5bb018SKeith M Wesolowski 		if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_VIRT_SSBD)
3367ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SSBD_VIRT);
3368ab5bb018SKeith M Wesolowski 		if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_SSB_NO)
3369ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SSB_NO);
3370651a12cbSRobert Mustacchi 
3371ab5bb018SKeith M Wesolowski 		/*
3372651a12cbSRobert Mustacchi 		 * Rather than Enhanced IBRS, AMD has a different feature:
3373651a12cbSRobert Mustacchi 		 * a bit in EFER that, once enabled, does the right thing
3374651a12cbSRobert Mustacchi 		 * automatically while executing in the kernel.
3375ab5bb018SKeith M Wesolowski 		 */
3376651a12cbSRobert Mustacchi 		if (cpi->cpi_vendor == X86_VENDOR_AMD &&
3377651a12cbSRobert Mustacchi 		    (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_PREFER_IBRS) &&
3378651a12cbSRobert Mustacchi 		    cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_21 &&
3379651a12cbSRobert Mustacchi 		    (cpi->cpi_extd[0x21].cp_eax & CPUID_AMD_8X21_EAX_AIBRS)) {
3380651a12cbSRobert Mustacchi 			add_x86_feature(featureset, X86FSET_AUTO_IBRS);
3381ab5bb018SKeith M Wesolowski 		}
3382ab5bb018SKeith M Wesolowski 
3383ab5bb018SKeith M Wesolowski 	} else if (cpi->cpi_vendor == X86_VENDOR_Intel &&
3384ab5bb018SKeith M Wesolowski 	    cpi->cpi_maxeax >= 7) {
3385ab5bb018SKeith M Wesolowski 		struct cpuid_regs *ecp;
3386ab5bb018SKeith M Wesolowski 		ecp = &cpi->cpi_std[7];
3387ab5bb018SKeith M Wesolowski 
3388ab5bb018SKeith M Wesolowski 		if (ecp->cp_edx & CPUID_INTC_EDX_7_0_MD_CLEAR) {
3389ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_MD_CLEAR);
3390ab5bb018SKeith M Wesolowski 		}
3391ab5bb018SKeith M Wesolowski 
3392ab5bb018SKeith M Wesolowski 		if (ecp->cp_edx & CPUID_INTC_EDX_7_0_SPEC_CTRL) {
3393ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_IBRS);
3394ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_IBPB);
3395ab5bb018SKeith M Wesolowski 		}
3396ab5bb018SKeith M Wesolowski 
3397ab5bb018SKeith M Wesolowski 		if (ecp->cp_edx & CPUID_INTC_EDX_7_0_STIBP) {
3398ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_STIBP);
3399ab5bb018SKeith M Wesolowski 		}
3400ab5bb018SKeith M Wesolowski 
3401ab5bb018SKeith M Wesolowski 		/*
34025a9c36deSDan McDonald 		 * Some prediction controls are enumerated by subleaf 2 of
34035a9c36deSDan McDonald 		 * leaf 7.
34045a9c36deSDan McDonald 		 */
34055a9c36deSDan McDonald 		if (CPI_FEATURES_7_2_EDX(cpi) & CPUID_INTC_EDX_7_2_BHI_CTRL) {
34065a9c36deSDan McDonald 			add_x86_feature(featureset, X86FSET_BHI_CTRL);
34075a9c36deSDan McDonald 		}
34085a9c36deSDan McDonald 
34095a9c36deSDan McDonald 		/*
3410ab5bb018SKeith M Wesolowski 		 * Don't read the arch caps MSR on xpv where we lack the
3411ab5bb018SKeith M Wesolowski 		 * on_trap().
3412ab5bb018SKeith M Wesolowski 		 */
3413ab5bb018SKeith M Wesolowski #ifndef __xpv
3414ab5bb018SKeith M Wesolowski 		if (ecp->cp_edx & CPUID_INTC_EDX_7_0_ARCH_CAPS) {
3415ab5bb018SKeith M Wesolowski 			on_trap_data_t otd;
3416ab5bb018SKeith M Wesolowski 
3417ab5bb018SKeith M Wesolowski 			/*
3418ab5bb018SKeith M Wesolowski 			 * Be paranoid and assume we'll get a #GP.
3419ab5bb018SKeith M Wesolowski 			 */
3420ab5bb018SKeith M Wesolowski 			if (!on_trap(&otd, OT_DATA_ACCESS)) {
3421ab5bb018SKeith M Wesolowski 				uint64_t reg;
3422ab5bb018SKeith M Wesolowski 
3423ab5bb018SKeith M Wesolowski 				reg = rdmsr(MSR_IA32_ARCH_CAPABILITIES);
3424ab5bb018SKeith M Wesolowski 				if (reg & IA32_ARCH_CAP_RDCL_NO) {
3425ab5bb018SKeith M Wesolowski 					add_x86_feature(featureset,
3426ab5bb018SKeith M Wesolowski 					    X86FSET_RDCL_NO);
3427ab5bb018SKeith M Wesolowski 				}
3428ab5bb018SKeith M Wesolowski 				if (reg & IA32_ARCH_CAP_IBRS_ALL) {
3429ab5bb018SKeith M Wesolowski 					add_x86_feature(featureset,
3430ab5bb018SKeith M Wesolowski 					    X86FSET_IBRS_ALL);
3431ab5bb018SKeith M Wesolowski 				}
3432ab5bb018SKeith M Wesolowski 				if (reg & IA32_ARCH_CAP_RSBA) {
3433ab5bb018SKeith M Wesolowski 					add_x86_feature(featureset,
3434ab5bb018SKeith M Wesolowski 					    X86FSET_RSBA);
3435ab5bb018SKeith M Wesolowski 				}
3436ab5bb018SKeith M Wesolowski 				if (reg & IA32_ARCH_CAP_SKIP_L1DFL_VMENTRY) {
3437ab5bb018SKeith M Wesolowski 					add_x86_feature(featureset,
3438ab5bb018SKeith M Wesolowski 					    X86FSET_L1D_VM_NO);
3439ab5bb018SKeith M Wesolowski 				}
3440ab5bb018SKeith M Wesolowski 				if (reg & IA32_ARCH_CAP_SSB_NO) {
3441ab5bb018SKeith M Wesolowski 					add_x86_feature(featureset,
3442ab5bb018SKeith M Wesolowski 					    X86FSET_SSB_NO);
3443ab5bb018SKeith M Wesolowski 				}
3444ab5bb018SKeith M Wesolowski 				if (reg & IA32_ARCH_CAP_MDS_NO) {
3445ab5bb018SKeith M Wesolowski 					add_x86_feature(featureset,
3446ab5bb018SKeith M Wesolowski 					    X86FSET_MDS_NO);
3447ab5bb018SKeith M Wesolowski 				}
3448ab5bb018SKeith M Wesolowski 				if (reg & IA32_ARCH_CAP_TSX_CTRL) {
3449ab5bb018SKeith M Wesolowski 					add_x86_feature(featureset,
3450ab5bb018SKeith M Wesolowski 					    X86FSET_TSX_CTRL);
3451ab5bb018SKeith M Wesolowski 				}
3452ab5bb018SKeith M Wesolowski 				if (reg & IA32_ARCH_CAP_TAA_NO) {
3453ab5bb018SKeith M Wesolowski 					add_x86_feature(featureset,
3454ab5bb018SKeith M Wesolowski 					    X86FSET_TAA_NO);
3455ab5bb018SKeith M Wesolowski 				}
34565cd084edSDan McDonald 				if (reg & IA32_ARCH_CAP_RFDS_NO) {
34575cd084edSDan McDonald 					add_x86_feature(featureset,
34585cd084edSDan McDonald 					    X86FSET_RFDS_NO);
34595cd084edSDan McDonald 				}
34605cd084edSDan McDonald 				if (reg & IA32_ARCH_CAP_RFDS_CLEAR) {
34615cd084edSDan McDonald 					add_x86_feature(featureset,
34625cd084edSDan McDonald 					    X86FSET_RFDS_CLEAR);
34635cd084edSDan McDonald 				}
3464a6e309baSDan McDonald 				if (reg & IA32_ARCH_CAP_PBRSB_NO) {
3465a6e309baSDan McDonald 					add_x86_feature(featureset,
3466a6e309baSDan McDonald 					    X86FSET_PBRSB_NO);
3467a6e309baSDan McDonald 				}
34685a9c36deSDan McDonald 				if (reg & IA32_ARCH_CAP_BHI_NO) {
34695a9c36deSDan McDonald 					add_x86_feature(featureset,
34705a9c36deSDan McDonald 					    X86FSET_BHI_NO);
34715a9c36deSDan McDonald 				}
3472ab5bb018SKeith M Wesolowski 			}
3473ab5bb018SKeith M Wesolowski 			no_trap();
3474ab5bb018SKeith M Wesolowski 		}
3475ab5bb018SKeith M Wesolowski #endif	/* !__xpv */
3476ab5bb018SKeith M Wesolowski 
3477ab5bb018SKeith M Wesolowski 		if (ecp->cp_edx & CPUID_INTC_EDX_7_0_SSBD)
3478ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SSBD);
3479ab5bb018SKeith M Wesolowski 
3480ab5bb018SKeith M Wesolowski 		if (ecp->cp_edx & CPUID_INTC_EDX_7_0_FLUSH_CMD)
3481ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_FLUSH_CMD);
3482ab5bb018SKeith M Wesolowski 	}
3483ab5bb018SKeith M Wesolowski 
3484ab5bb018SKeith M Wesolowski 	/*
3485ab5bb018SKeith M Wesolowski 	 * Take care of certain mitigations on the non-boot CPU. The boot CPU
3486ab5bb018SKeith M Wesolowski 	 * will have already run this function and determined what we need to
3487ab5bb018SKeith M Wesolowski 	 * do. This gives us a hook for per-HW thread mitigations such as
3488ab5bb018SKeith M Wesolowski 	 * enhanced IBRS, or disabling TSX.
3489ab5bb018SKeith M Wesolowski 	 */
3490ab5bb018SKeith M Wesolowski 	if (cpu->cpu_id != 0) {
3491651a12cbSRobert Mustacchi 		switch (x86_spectrev2_mitigation) {
3492651a12cbSRobert Mustacchi 		case X86_SPECTREV2_ENHANCED_IBRS:
3493ab5bb018SKeith M Wesolowski 			cpuid_enable_enhanced_ibrs();
3494651a12cbSRobert Mustacchi 			break;
3495651a12cbSRobert Mustacchi 		case X86_SPECTREV2_AUTO_IBRS:
3496651a12cbSRobert Mustacchi 			cpuid_enable_auto_ibrs();
3497651a12cbSRobert Mustacchi 			break;
3498651a12cbSRobert Mustacchi 		default:
3499651a12cbSRobert Mustacchi 			break;
3500ab5bb018SKeith M Wesolowski 		}
3501ab5bb018SKeith M Wesolowski 
35025a9c36deSDan McDonald 		/* If we're committed to BHI_DIS_S, set it for this core. */
35035a9c36deSDan McDonald 		if (x86_bhi_mitigation == X86_BHI_DIS_S)
35045a9c36deSDan McDonald 			cpuid_enable_bhi_dis_s();
35055a9c36deSDan McDonald 
3506ab5bb018SKeith M Wesolowski 		cpuid_apply_tsx(x86_taa_mitigation, featureset);
3507ab5bb018SKeith M Wesolowski 		return;
3508ab5bb018SKeith M Wesolowski 	}
3509ab5bb018SKeith M Wesolowski 
3510ab5bb018SKeith M Wesolowski 	/*
3511ab5bb018SKeith M Wesolowski 	 * Go through and initialize various security mechanisms that we should
3512ab5bb018SKeith M Wesolowski 	 * only do on a single CPU. This includes Spectre V2, L1TF, MDS, and
3513ab5bb018SKeith M Wesolowski 	 * TAA.
3514ab5bb018SKeith M Wesolowski 	 */
3515ab5bb018SKeith M Wesolowski 
3516ab5bb018SKeith M Wesolowski 	/*
3517ab5bb018SKeith M Wesolowski 	 * By default we've come in with retpolines enabled. Check whether we
35185a9c36deSDan McDonald 	 * should disable them or enable enhanced or automatic IBRS.
35195a9c36deSDan McDonald 	 *
35205a9c36deSDan McDonald 	 * Note that we do not allow the use of AMD-optimized retpolines, as
35215a9c36deSDan McDonald 	 * AMD disclosed in March 2022 that they were still vulnerable.
35225a9c36deSDan McDonald 	 * Prior to that point, we used them.
3523ab5bb018SKeith M Wesolowski 	 */
3524ab5bb018SKeith M Wesolowski 	if (x86_disable_spectrev2 != 0) {
3525ab5bb018SKeith M Wesolowski 		v2mit = X86_SPECTREV2_DISABLED;
3526651a12cbSRobert Mustacchi 	} else if (is_x86_feature(featureset, X86FSET_AUTO_IBRS)) {
3527651a12cbSRobert Mustacchi 		cpuid_enable_auto_ibrs();
3528651a12cbSRobert Mustacchi 		v2mit = X86_SPECTREV2_AUTO_IBRS;
3529ab5bb018SKeith M Wesolowski 	} else if (is_x86_feature(featureset, X86FSET_IBRS_ALL)) {
3530ab5bb018SKeith M Wesolowski 		cpuid_enable_enhanced_ibrs();
3531ab5bb018SKeith M Wesolowski 		v2mit = X86_SPECTREV2_ENHANCED_IBRS;
3532ab5bb018SKeith M Wesolowski 	} else {
3533ab5bb018SKeith M Wesolowski 		v2mit = X86_SPECTREV2_RETPOLINE;
3534ab5bb018SKeith M Wesolowski 	}
3535ab5bb018SKeith M Wesolowski 
3536ab5bb018SKeith M Wesolowski 	cpuid_patch_retpolines(v2mit);
3537a6e309baSDan McDonald 	cpuid_patch_rsb(v2mit, is_x86_feature(featureset, X86FSET_PBRSB_NO));
35385a9c36deSDan McDonald 	x86_bhi_mitigation = cpuid_learn_and_patch_bhi(v2mit, cpu, featureset);
3539ab5bb018SKeith M Wesolowski 	x86_spectrev2_mitigation = v2mit;
3540ab5bb018SKeith M Wesolowski 	membar_producer();
3541ab5bb018SKeith M Wesolowski 
3542ab5bb018SKeith M Wesolowski 	/*
3543ab5bb018SKeith M Wesolowski 	 * We need to determine what changes are required for mitigating L1TF
3544ab5bb018SKeith M Wesolowski 	 * and MDS. If the CPU suffers from either of them, then SMT exclusion
3545ab5bb018SKeith M Wesolowski 	 * is required.
3546ab5bb018SKeith M Wesolowski 	 *
3547ab5bb018SKeith M Wesolowski 	 * If any of these are present, then we need to flush u-arch state at
3548ab5bb018SKeith M Wesolowski 	 * various points. For MDS, we need to do so whenever we change to a
3549ab5bb018SKeith M Wesolowski 	 * lesser privilege level or we are halting the CPU. For L1TF we need to
3550ab5bb018SKeith M Wesolowski 	 * flush the L1D cache at VM entry. When we have microcode that handles
3551ab5bb018SKeith M Wesolowski 	 * MDS, the L1D flush also clears the other u-arch state that the
3552ab5bb018SKeith M Wesolowski 	 * md_clear does.
3553ab5bb018SKeith M Wesolowski 	 */
3554ab5bb018SKeith M Wesolowski 
3555ab5bb018SKeith M Wesolowski 	/*
3556ab5bb018SKeith M Wesolowski 	 * Update whether or not we need to be taking explicit action against
35575cd084edSDan McDonald 	 * MDS or RFDS.
3558ab5bb018SKeith M Wesolowski 	 */
3559ab5bb018SKeith M Wesolowski 	cpuid_update_md_clear(cpu, featureset);
3560ab5bb018SKeith M Wesolowski 
3561ab5bb018SKeith M Wesolowski 	/*
3562ab5bb018SKeith M Wesolowski 	 * Determine whether SMT exclusion is required and whether or not we
3563ab5bb018SKeith M Wesolowski 	 * need to perform an l1d flush.
3564ab5bb018SKeith M Wesolowski 	 */
3565ab5bb018SKeith M Wesolowski 	cpuid_update_l1d_flush(cpu, featureset);
3566ab5bb018SKeith M Wesolowski 
3567ab5bb018SKeith M Wesolowski 	/*
3568ab5bb018SKeith M Wesolowski 	 * Determine what our mitigation strategy should be for TAA and then
3569ab5bb018SKeith M Wesolowski 	 * also apply TAA mitigations.
3570ab5bb018SKeith M Wesolowski 	 */
3571ab5bb018SKeith M Wesolowski 	cpuid_update_tsx(cpu, featureset);
3572ab5bb018SKeith M Wesolowski 	cpuid_apply_tsx(x86_taa_mitigation, featureset);
3573ab5bb018SKeith M Wesolowski }
3574ab5bb018SKeith M Wesolowski 
3575ab5bb018SKeith M Wesolowski /*
3576ab5bb018SKeith M Wesolowski  * Setup XFeature_Enabled_Mask register. Required by xsave feature.
3577ab5bb018SKeith M Wesolowski  */
3578ab5bb018SKeith M Wesolowski void
3579ab5bb018SKeith M Wesolowski setup_xfem(void)
3580ab5bb018SKeith M Wesolowski {
3581ab5bb018SKeith M Wesolowski 	uint64_t flags = XFEATURE_LEGACY_FP;
3582ab5bb018SKeith M Wesolowski 
3583ab5bb018SKeith M Wesolowski 	ASSERT(is_x86_feature(x86_featureset, X86FSET_XSAVE));
3584ab5bb018SKeith M Wesolowski 
3585ab5bb018SKeith M Wesolowski 	if (is_x86_feature(x86_featureset, X86FSET_SSE))
3586ab5bb018SKeith M Wesolowski 		flags |= XFEATURE_SSE;
3587ab5bb018SKeith M Wesolowski 
3588ab5bb018SKeith M Wesolowski 	if (is_x86_feature(x86_featureset, X86FSET_AVX))
3589ab5bb018SKeith M Wesolowski 		flags |= XFEATURE_AVX;
3590ab5bb018SKeith M Wesolowski 
3591ab5bb018SKeith M Wesolowski 	if (is_x86_feature(x86_featureset, X86FSET_AVX512F))
3592ab5bb018SKeith M Wesolowski 		flags |= XFEATURE_AVX512;
3593ab5bb018SKeith M Wesolowski 
3594ab5bb018SKeith M Wesolowski 	set_xcr(XFEATURE_ENABLED_MASK, flags);
3595ab5bb018SKeith M Wesolowski 
3596ab5bb018SKeith M Wesolowski 	xsave_bv_all = flags;
3597ab5bb018SKeith M Wesolowski }
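/*
 * Editorial note: the flags composed above follow the architectural XCR0
 * layout: bit 0 is x87 state (always set), bit 1 is SSE/XMM state, bit 2 is
 * AVX/YMM state, and the three AVX-512 state components (opmask, ZMM_Hi256,
 * Hi16_ZMM) occupy bits 5-7, which XFEATURE_AVX512 is assumed to cover as a
 * group.
 */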
3598ab5bb018SKeith M Wesolowski 
3599ab5bb018SKeith M Wesolowski static void
3600ab5bb018SKeith M Wesolowski cpuid_basic_topology(cpu_t *cpu, uchar_t *featureset)
3601ab5bb018SKeith M Wesolowski {
3602ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi;
3603ab5bb018SKeith M Wesolowski 
3604ab5bb018SKeith M Wesolowski 	cpi = cpu->cpu_m.mcpu_cpi;
3605ab5bb018SKeith M Wesolowski 
3606ab5bb018SKeith M Wesolowski 	if (cpi->cpi_vendor == X86_VENDOR_AMD ||
3607ab5bb018SKeith M Wesolowski 	    cpi->cpi_vendor == X86_VENDOR_HYGON) {
3608ab5bb018SKeith M Wesolowski 		cpuid_gather_amd_topology_leaves(cpu);
3609ab5bb018SKeith M Wesolowski 	}
3610ab5bb018SKeith M Wesolowski 
3611ab5bb018SKeith M Wesolowski 	cpi->cpi_apicid = cpuid_gather_apicid(cpi);
3612ab5bb018SKeith M Wesolowski 
3613ab5bb018SKeith M Wesolowski 	/*
3614ab5bb018SKeith M Wesolowski 	 * Before we can calculate the IDs that we should assign to this
3615ab5bb018SKeith M Wesolowski 	 * processor, we need to understand how many cores and threads it has.
3616ab5bb018SKeith M Wesolowski 	 */
3617ab5bb018SKeith M Wesolowski 	switch (cpi->cpi_vendor) {
3618ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Intel:
3619ab5bb018SKeith M Wesolowski 		cpuid_intel_ncores(cpi, &cpi->cpi_ncpu_per_chip,
3620ab5bb018SKeith M Wesolowski 		    &cpi->cpi_ncore_per_chip);
3621ab5bb018SKeith M Wesolowski 		break;
3622ab5bb018SKeith M Wesolowski 	case X86_VENDOR_AMD:
3623ab5bb018SKeith M Wesolowski 	case X86_VENDOR_HYGON:
3624ab5bb018SKeith M Wesolowski 		cpuid_amd_ncores(cpi, &cpi->cpi_ncpu_per_chip,
3625ab5bb018SKeith M Wesolowski 		    &cpi->cpi_ncore_per_chip);
3626ab5bb018SKeith M Wesolowski 		break;
3627ab5bb018SKeith M Wesolowski 	default:
3628ab5bb018SKeith M Wesolowski 		/*
3629ab5bb018SKeith M Wesolowski 		 * If we have some other x86-compatible chip, it's not clear how
3630ab5bb018SKeith M Wesolowski 		 * it would behave. The most common case is virtualization
3631ab5bb018SKeith M Wesolowski 		 * today, though there are also 64-bit VIA chips. Assume that
3632ab5bb018SKeith M Wesolowski 		 * all we can get is the basic Leaf 1 HTT information.
3633ab5bb018SKeith M Wesolowski 		 */
3634ab5bb018SKeith M Wesolowski 		if ((cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_HTT) != 0) {
3635ab5bb018SKeith M Wesolowski 			cpi->cpi_ncore_per_chip = 1;
3636ab5bb018SKeith M Wesolowski 			cpi->cpi_ncpu_per_chip = CPI_CPU_COUNT(cpi);
3637ab5bb018SKeith M Wesolowski 		}
3638ab5bb018SKeith M Wesolowski 		break;
3639ab5bb018SKeith M Wesolowski 	}
3640ab5bb018SKeith M Wesolowski 
3641ab5bb018SKeith M Wesolowski 	/*
3642ab5bb018SKeith M Wesolowski 	 * Based on the calculated number of threads and cores, potentially
3643ab5bb018SKeith M Wesolowski 	 * assign the HTT and CMT features.
3644ab5bb018SKeith M Wesolowski 	 */
3645ab5bb018SKeith M Wesolowski 	if (cpi->cpi_ncore_per_chip > 1) {
3646ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_CMP);
3647ab5bb018SKeith M Wesolowski 	}
3648ab5bb018SKeith M Wesolowski 
3649ab5bb018SKeith M Wesolowski 	if (cpi->cpi_ncpu_per_chip > 1 &&
3650ab5bb018SKeith M Wesolowski 	    cpi->cpi_ncpu_per_chip != cpi->cpi_ncore_per_chip) {
3651ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_HTT);
3652ab5bb018SKeith M Wesolowski 	}
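	/*
	 * Editorial example (hypothetical values): a chip reporting 4 cores
	 * and 8 threads gets both CMP and HTT; a 4-core chip with no SMT
	 * reports 4 and 4 and gets only CMP; a single-core, single-thread
	 * part gets neither and falls into the fake-up path below.
	 */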
3653ab5bb018SKeith M Wesolowski 
3654ab5bb018SKeith M Wesolowski 	/*
3655ab5bb018SKeith M Wesolowski 	 * Now that this has been set up, we need to go through and calculate
3656ab5bb018SKeith M Wesolowski 	 * the rest of the parameters that exist. If we think the CPU doesn't
3657ab5bb018SKeith M Wesolowski 	 * have either SMT (HTT) or CMP, then we basically go through and fake
3658ab5bb018SKeith M Wesolowski 	 * up information in some way. The most likely case for this is
3659ab5bb018SKeith M Wesolowski 	 * virtualization where we have a lot of partial topology information.
3660ab5bb018SKeith M Wesolowski 	 */
3661ab5bb018SKeith M Wesolowski 	if (!is_x86_feature(featureset, X86FSET_HTT) &&
3662ab5bb018SKeith M Wesolowski 	    !is_x86_feature(featureset, X86FSET_CMP)) {
3663ab5bb018SKeith M Wesolowski 		/*
3664ab5bb018SKeith M Wesolowski 		 * This is a single core, single-threaded processor.
3665ab5bb018SKeith M Wesolowski 		 */
3666ab5bb018SKeith M Wesolowski 		cpi->cpi_procnodes_per_pkg = 1;
3667ab5bb018SKeith M Wesolowski 		cpi->cpi_cores_per_compunit = 1;
3668ab5bb018SKeith M Wesolowski 		cpi->cpi_compunitid = 0;
3669ab5bb018SKeith M Wesolowski 		cpi->cpi_chipid = -1;
3670ab5bb018SKeith M Wesolowski 		cpi->cpi_clogid = 0;
3671ab5bb018SKeith M Wesolowski 		cpi->cpi_coreid = cpu->cpu_id;
3672ab5bb018SKeith M Wesolowski 		cpi->cpi_pkgcoreid = 0;
3673ab5bb018SKeith M Wesolowski 		if (cpi->cpi_vendor == X86_VENDOR_AMD ||
3674ab5bb018SKeith M Wesolowski 		    cpi->cpi_vendor == X86_VENDOR_HYGON) {
3675ab5bb018SKeith M Wesolowski 			cpi->cpi_procnodeid = BITX(cpi->cpi_apicid, 3, 0);
3676ab5bb018SKeith M Wesolowski 		} else {
3677ab5bb018SKeith M Wesolowski 			cpi->cpi_procnodeid = cpi->cpi_chipid;
3678ab5bb018SKeith M Wesolowski 		}
3679ab5bb018SKeith M Wesolowski 	} else {
3680ab5bb018SKeith M Wesolowski 		switch (cpi->cpi_vendor) {
3681ab5bb018SKeith M Wesolowski 		case X86_VENDOR_Intel:
3682ab5bb018SKeith M Wesolowski 			cpuid_intel_getids(cpu, featureset);
3683ab5bb018SKeith M Wesolowski 			break;
3684ab5bb018SKeith M Wesolowski 		case X86_VENDOR_AMD:
3685ab5bb018SKeith M Wesolowski 		case X86_VENDOR_HYGON:
3686ab5bb018SKeith M Wesolowski 			cpuid_amd_getids(cpu, featureset);
3687ab5bb018SKeith M Wesolowski 			break;
3688ab5bb018SKeith M Wesolowski 		default:
3689ab5bb018SKeith M Wesolowski 			/*
3690ab5bb018SKeith M Wesolowski 			 * In this case, it's hard to say what we should do.
3691ab5bb018SKeith M Wesolowski 			 * We're going to model them to the OS as single core
3692ab5bb018SKeith M Wesolowski 			 * threads. We don't have a good identifier for them, so
3693ab5bb018SKeith M Wesolowski 			 * we're just going to use the cpu id all on a single
3694ab5bb018SKeith M Wesolowski 			 * chip.
3695ab5bb018SKeith M Wesolowski 			 *
3696ab5bb018SKeith M Wesolowski 			 * This case has historically been different from the
3697ab5bb018SKeith M Wesolowski 			 * case above where we don't have HTT or CMP. While they
3698ab5bb018SKeith M Wesolowski 			 * could be combined, we've opted to keep it separate to
3699ab5bb018SKeith M Wesolowski 			 * minimize the risk of topology changes in weird cases.
3700ab5bb018SKeith M Wesolowski 			 */
3701ab5bb018SKeith M Wesolowski 			cpi->cpi_procnodes_per_pkg = 1;
3702ab5bb018SKeith M Wesolowski 			cpi->cpi_cores_per_compunit = 1;
3703ab5bb018SKeith M Wesolowski 			cpi->cpi_chipid = 0;
3704ab5bb018SKeith M Wesolowski 			cpi->cpi_coreid = cpu->cpu_id;
3705ab5bb018SKeith M Wesolowski 			cpi->cpi_clogid = cpu->cpu_id;
3706ab5bb018SKeith M Wesolowski 			cpi->cpi_pkgcoreid = cpu->cpu_id;
3707ab5bb018SKeith M Wesolowski 			cpi->cpi_procnodeid = cpi->cpi_chipid;
3708ab5bb018SKeith M Wesolowski 			cpi->cpi_compunitid = cpi->cpi_coreid;
3709ab5bb018SKeith M Wesolowski 			break;
3710ab5bb018SKeith M Wesolowski 		}
3711ab5bb018SKeith M Wesolowski 	}
3712ab5bb018SKeith M Wesolowski }
3713ab5bb018SKeith M Wesolowski 
3714ab5bb018SKeith M Wesolowski /*
3715ab5bb018SKeith M Wesolowski  * Gather relevant CPU features from leaf 6, which covers thermal information. We
3716ab5bb018SKeith M Wesolowski  * always gather leaf 6 if it's supported; however, we only look for features on
3717ab5bb018SKeith M Wesolowski  * Intel systems as AMD does not currently define any of the features we look
3718ab5bb018SKeith M Wesolowski  * for below.
3719ab5bb018SKeith M Wesolowski  */
3720ab5bb018SKeith M Wesolowski static void
3721ab5bb018SKeith M Wesolowski cpuid_basic_thermal(cpu_t *cpu, uchar_t *featureset)
3722ab5bb018SKeith M Wesolowski {
3723ab5bb018SKeith M Wesolowski 	struct cpuid_regs *cp;
3724ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
3725ab5bb018SKeith M Wesolowski 
3726ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax < 6) {
3727ab5bb018SKeith M Wesolowski 		return;
3728ab5bb018SKeith M Wesolowski 	}
3729ab5bb018SKeith M Wesolowski 
3730ab5bb018SKeith M Wesolowski 	cp = &cpi->cpi_std[6];
3731ab5bb018SKeith M Wesolowski 	cp->cp_eax = 6;
3732ab5bb018SKeith M Wesolowski 	cp->cp_ebx = cp->cp_ecx = cp->cp_edx = 0;
3733ab5bb018SKeith M Wesolowski 	(void) __cpuid_insn(cp);
3734ab5bb018SKeith M Wesolowski 	platform_cpuid_mangle(cpi->cpi_vendor, 6, cp);
3735ab5bb018SKeith M Wesolowski 
3736ab5bb018SKeith M Wesolowski 	if (cpi->cpi_vendor != X86_VENDOR_Intel) {
3737ab5bb018SKeith M Wesolowski 		return;
3738ab5bb018SKeith M Wesolowski 	}
3739ab5bb018SKeith M Wesolowski 
3740ab5bb018SKeith M Wesolowski 	if ((cp->cp_eax & CPUID_INTC_EAX_DTS) != 0) {
3741ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_CORE_THERMAL);
3742ab5bb018SKeith M Wesolowski 	}
3743ab5bb018SKeith M Wesolowski 
3744ab5bb018SKeith M Wesolowski 	if ((cp->cp_eax & CPUID_INTC_EAX_PTM) != 0) {
3745ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_PKG_THERMAL);
3746ab5bb018SKeith M Wesolowski 	}
3747ab5bb018SKeith M Wesolowski }
3748ab5bb018SKeith M Wesolowski 
3749ab5bb018SKeith M Wesolowski /*
375056726c7eSRobert Mustacchi  * This is used when we discover that we have AVX support in cpuid. It then
375156726c7eSRobert Mustacchi  * scans for the rest of the AVX-derived features.
375256726c7eSRobert Mustacchi  */
375356726c7eSRobert Mustacchi static void
375456726c7eSRobert Mustacchi cpuid_basic_avx(cpu_t *cpu, uchar_t *featureset)
375556726c7eSRobert Mustacchi {
375656726c7eSRobert Mustacchi 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
375756726c7eSRobert Mustacchi 
375856726c7eSRobert Mustacchi 	/*
375956726c7eSRobert Mustacchi 	 * If we don't have AVX, don't bother with most of this.
376056726c7eSRobert Mustacchi 	 */
376156726c7eSRobert Mustacchi 	if ((cpi->cpi_std[1].cp_ecx & CPUID_INTC_ECX_AVX) == 0)
376256726c7eSRobert Mustacchi 		return;
376356726c7eSRobert Mustacchi 
376456726c7eSRobert Mustacchi 	add_x86_feature(featureset, X86FSET_AVX);
376556726c7eSRobert Mustacchi 
376656726c7eSRobert Mustacchi 	/*
376756726c7eSRobert Mustacchi 	 * Intel says we can't check these without also
376856726c7eSRobert Mustacchi 	 * checking AVX.
376956726c7eSRobert Mustacchi 	 */
377056726c7eSRobert Mustacchi 	if (cpi->cpi_std[1].cp_ecx & CPUID_INTC_ECX_F16C)
377156726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_F16C);
377256726c7eSRobert Mustacchi 
377356726c7eSRobert Mustacchi 	if (cpi->cpi_std[1].cp_ecx & CPUID_INTC_ECX_FMA)
377456726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_FMA);
377556726c7eSRobert Mustacchi 
377656726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_BMI1)
377756726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_BMI1);
377856726c7eSRobert Mustacchi 
377956726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_BMI2)
378056726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_BMI2);
378156726c7eSRobert Mustacchi 
378256726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX2)
378356726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX2);
378456726c7eSRobert Mustacchi 
378556726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_VAES)
378656726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_VAES);
378756726c7eSRobert Mustacchi 
378856726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_VPCLMULQDQ)
378956726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_VPCLMULQDQ);
379056726c7eSRobert Mustacchi 
379156726c7eSRobert Mustacchi 	/*
379256726c7eSRobert Mustacchi 	 * The rest of the AVX features require AVX512. Do not check them unless
379356726c7eSRobert Mustacchi 	 * it is present.
379456726c7eSRobert Mustacchi 	 */
379556726c7eSRobert Mustacchi 	if ((cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512F) == 0)
379656726c7eSRobert Mustacchi 		return;
379756726c7eSRobert Mustacchi 	add_x86_feature(featureset, X86FSET_AVX512F);
379856726c7eSRobert Mustacchi 
379956726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512DQ)
380056726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512DQ);
380156726c7eSRobert Mustacchi 
380256726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512IFMA)
380356726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512FMA);
380456726c7eSRobert Mustacchi 
380556726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512PF)
380656726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512PF);
380756726c7eSRobert Mustacchi 
380856726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512ER)
380956726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512ER);
381056726c7eSRobert Mustacchi 
381156726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512CD)
381256726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512CD);
381356726c7eSRobert Mustacchi 
381456726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512BW)
381556726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512BW);
381656726c7eSRobert Mustacchi 
381756726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_AVX512VL)
381856726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512VL);
381956726c7eSRobert Mustacchi 
382056726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VBMI)
382156726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512VBMI);
382256726c7eSRobert Mustacchi 
382356726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VBMI2)
382456726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512_VBMI2);
382556726c7eSRobert Mustacchi 
382656726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VNNI)
382756726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512VNNI);
382856726c7eSRobert Mustacchi 
382956726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512BITALG)
383056726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512_BITALG);
383156726c7eSRobert Mustacchi 
383256726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_ecx & CPUID_INTC_ECX_7_0_AVX512VPOPCDQ)
383356726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512VPOPCDQ);
383456726c7eSRobert Mustacchi 
383556726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_edx & CPUID_INTC_EDX_7_0_AVX5124NNIW)
383656726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512NNIW);
383756726c7eSRobert Mustacchi 
383856726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_edx & CPUID_INTC_EDX_7_0_AVX5124FMAPS)
383956726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512FMAPS);
384056726c7eSRobert Mustacchi 
384156726c7eSRobert Mustacchi 	/*
384256726c7eSRobert Mustacchi 	 * More features live in Leaf 7, subleaf 1. Don't bother checking for
384356726c7eSRobert Mustacchi 	 * them if the subleaf isn't present.
384456726c7eSRobert Mustacchi 	 */
384556726c7eSRobert Mustacchi 	if (cpi->cpi_std[7].cp_eax < 1)
384656726c7eSRobert Mustacchi 		return;
384756726c7eSRobert Mustacchi 
384856726c7eSRobert Mustacchi 	if (cpi->cpi_sub7[0].cp_eax & CPUID_INTC_EAX_7_1_AVX512_BF16)
384956726c7eSRobert Mustacchi 		add_x86_feature(featureset, X86FSET_AVX512_BF16);
385056726c7eSRobert Mustacchi }
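/*
 * Editorial note: leaf 7's %eax reports the maximum supported subleaf, so
 * the cp_eax < 1 check above is what guards the use of cpi_sub7[0]
 * (subleaf 1), which is assumed to have been gathered earlier alongside the
 * other leaf 7 data.
 */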
385156726c7eSRobert Mustacchi 
385256726c7eSRobert Mustacchi /*
3853ab5bb018SKeith M Wesolowski  * PPIN is the protected processor inventory number. On AMD this is an actual
3854ab5bb018SKeith M Wesolowski  * feature bit. However, on Intel systems we need to read the platform
3855ab5bb018SKeith M Wesolowski  * information MSR if we're on a specific model.
3856ab5bb018SKeith M Wesolowski  */
3857ab5bb018SKeith M Wesolowski #if !defined(__xpv)
3858ab5bb018SKeith M Wesolowski static void
3859ab5bb018SKeith M Wesolowski cpuid_basic_ppin(cpu_t *cpu, uchar_t *featureset)
3860ab5bb018SKeith M Wesolowski {
3861ab5bb018SKeith M Wesolowski 	on_trap_data_t otd;
3862ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
3863ab5bb018SKeith M Wesolowski 
3864ab5bb018SKeith M Wesolowski 	switch (cpi->cpi_vendor) {
3865ab5bb018SKeith M Wesolowski 	case X86_VENDOR_AMD:
3866ab5bb018SKeith M Wesolowski 		/*
3867ab5bb018SKeith M Wesolowski 		 * This leaf will have already been gathered in the topology
3868ab5bb018SKeith M Wesolowski 		 * functions.
3869ab5bb018SKeith M Wesolowski 		 */
3870ab5bb018SKeith M Wesolowski 		if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8) {
3871ab5bb018SKeith M Wesolowski 			if (cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_PPIN) {
3872ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_PPIN);
3873ab5bb018SKeith M Wesolowski 			}
3874ab5bb018SKeith M Wesolowski 		}
3875ab5bb018SKeith M Wesolowski 		break;
3876ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Intel:
3877ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family != 6)
3878ab5bb018SKeith M Wesolowski 			break;
3879ab5bb018SKeith M Wesolowski 		switch (cpi->cpi_model) {
3880ab5bb018SKeith M Wesolowski 		case INTC_MODEL_IVYBRIDGE_XEON:
3881ab5bb018SKeith M Wesolowski 		case INTC_MODEL_HASWELL_XEON:
3882ab5bb018SKeith M Wesolowski 		case INTC_MODEL_BROADWELL_XEON:
3883ab5bb018SKeith M Wesolowski 		case INTC_MODEL_BROADWELL_XEON_D:
3884ab5bb018SKeith M Wesolowski 		case INTC_MODEL_SKYLAKE_XEON:
3885ab5bb018SKeith M Wesolowski 		case INTC_MODEL_ICELAKE_XEON:
3886ab5bb018SKeith M Wesolowski 			if (!on_trap(&otd, OT_DATA_ACCESS)) {
3887ab5bb018SKeith M Wesolowski 				uint64_t value;
3888ab5bb018SKeith M Wesolowski 
3889ab5bb018SKeith M Wesolowski 				value = rdmsr(MSR_PLATFORM_INFO);
3890ab5bb018SKeith M Wesolowski 				if ((value & MSR_PLATFORM_INFO_PPIN) != 0) {
3891ab5bb018SKeith M Wesolowski 					add_x86_feature(featureset,
3892ab5bb018SKeith M Wesolowski 					    X86FSET_PPIN);
3893ab5bb018SKeith M Wesolowski 				}
3894ab5bb018SKeith M Wesolowski 			}
3895ab5bb018SKeith M Wesolowski 			no_trap();
3896ab5bb018SKeith M Wesolowski 			break;
3897ab5bb018SKeith M Wesolowski 		default:
3898ab5bb018SKeith M Wesolowski 			break;
3899ab5bb018SKeith M Wesolowski 		}
3900ab5bb018SKeith M Wesolowski 		break;
3901ab5bb018SKeith M Wesolowski 	default:
3902ab5bb018SKeith M Wesolowski 		break;
3903ab5bb018SKeith M Wesolowski 	}
3904ab5bb018SKeith M Wesolowski }
3905ab5bb018SKeith M Wesolowski #endif	/* ! __xpv */
3906ab5bb018SKeith M Wesolowski 
3907ab5bb018SKeith M Wesolowski static void
3908ab5bb018SKeith M Wesolowski cpuid_pass_prelude(cpu_t *cpu, void *arg)
3909ab5bb018SKeith M Wesolowski {
3910ab5bb018SKeith M Wesolowski 	uchar_t *featureset = (uchar_t *)arg;
3911ab5bb018SKeith M Wesolowski 
3912ab5bb018SKeith M Wesolowski 	/*
3913ab5bb018SKeith M Wesolowski 	 * We don't run on any processor that doesn't have cpuid; we could not
3914ab5bb018SKeith M Wesolowski 	 * possibly have arrived here on one.
3915ab5bb018SKeith M Wesolowski 	 */
3916ab5bb018SKeith M Wesolowski 	add_x86_feature(featureset, X86FSET_CPUID);
3917ab5bb018SKeith M Wesolowski }
3918ab5bb018SKeith M Wesolowski 
3919ab5bb018SKeith M Wesolowski static void
3920ab5bb018SKeith M Wesolowski cpuid_pass_ident(cpu_t *cpu, void *arg __unused)
3921ab5bb018SKeith M Wesolowski {
3922ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi;
3923ab5bb018SKeith M Wesolowski 	struct cpuid_regs *cp;
3924ab5bb018SKeith M Wesolowski 
3925ab5bb018SKeith M Wesolowski 	/*
3926ab5bb018SKeith M Wesolowski 	 * We require that virtual/native detection be complete and that PCI
3927ab5bb018SKeith M Wesolowski 	 * config space access has been set up; at present there is no reliable
3928ab5bb018SKeith M Wesolowski 	 * way to determine the latter.
3929ab5bb018SKeith M Wesolowski 	 */
393022e4c3acSKeith M Wesolowski #if !defined(__xpv)
3931ab5bb018SKeith M Wesolowski 	ASSERT3S(platform_type, !=, -1);
393222e4c3acSKeith M Wesolowski #endif	/* !__xpv */
3933ab5bb018SKeith M Wesolowski 
3934ab5bb018SKeith M Wesolowski 	cpi = cpu->cpu_m.mcpu_cpi;
3935ab5bb018SKeith M Wesolowski 	ASSERT(cpi != NULL);
3936ab5bb018SKeith M Wesolowski 
3937ab5bb018SKeith M Wesolowski 	cp = &cpi->cpi_std[0];
3938ab5bb018SKeith M Wesolowski 	cp->cp_eax = 0;
3939ab5bb018SKeith M Wesolowski 	cpi->cpi_maxeax = __cpuid_insn(cp);
3940ab5bb018SKeith M Wesolowski 	{
3941ab5bb018SKeith M Wesolowski 		uint32_t *iptr = (uint32_t *)cpi->cpi_vendorstr;
3942ab5bb018SKeith M Wesolowski 		*iptr++ = cp->cp_ebx;
3943ab5bb018SKeith M Wesolowski 		*iptr++ = cp->cp_edx;
3944ab5bb018SKeith M Wesolowski 		*iptr++ = cp->cp_ecx;
3945ab5bb018SKeith M Wesolowski 		*(char *)&cpi->cpi_vendorstr[12] = '\0';
3946ab5bb018SKeith M Wesolowski 	}
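	/*
	 * Editorial example: leaf 0 returns the vendor string in %ebx, %edx,
	 * %ecx order, so on an Intel part the three registers hold "Genu",
	 * "ineI", and "ntel", assembling to "GenuineIntel" above.
	 */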
3947ab5bb018SKeith M Wesolowski 
3948ab5bb018SKeith M Wesolowski 	cpi->cpi_vendor = _cpuid_vendorstr_to_vendorcode(cpi->cpi_vendorstr);
3949ab5bb018SKeith M Wesolowski 	x86_vendor = cpi->cpi_vendor; /* for compatibility */
3950ab5bb018SKeith M Wesolowski 
3951ab5bb018SKeith M Wesolowski 	/*
3952ab5bb018SKeith M Wesolowski 	 * Limit the range in case of weird hardware
3953ab5bb018SKeith M Wesolowski 	 */
3954ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax > CPI_MAXEAX_MAX)
3955ab5bb018SKeith M Wesolowski 		cpi->cpi_maxeax = CPI_MAXEAX_MAX;
3956ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax < 1)
3957ab5bb018SKeith M Wesolowski 		return;
3958ab5bb018SKeith M Wesolowski 
3959ab5bb018SKeith M Wesolowski 	cp = &cpi->cpi_std[1];
3960ab5bb018SKeith M Wesolowski 	cp->cp_eax = 1;
3961ab5bb018SKeith M Wesolowski 	(void) __cpuid_insn(cp);
3962ab5bb018SKeith M Wesolowski 
3963ab5bb018SKeith M Wesolowski 	/*
3964ab5bb018SKeith M Wesolowski 	 * Extract identifying constants for easy access.
3965ab5bb018SKeith M Wesolowski 	 */
3966ab5bb018SKeith M Wesolowski 	cpi->cpi_model = CPI_MODEL(cpi);
3967ab5bb018SKeith M Wesolowski 	cpi->cpi_family = CPI_FAMILY(cpi);
3968ab5bb018SKeith M Wesolowski 
3969ab5bb018SKeith M Wesolowski 	if (cpi->cpi_family == 0xf)
3970ab5bb018SKeith M Wesolowski 		cpi->cpi_family += CPI_FAMILY_XTD(cpi);
3971ab5bb018SKeith M Wesolowski 
3972ab5bb018SKeith M Wesolowski 	/*
3973ab5bb018SKeith M Wesolowski 	 * Beware: AMD uses "extended model" iff base *FAMILY* == 0xf.
3974ab5bb018SKeith M Wesolowski 	 * Intel, and presumably everyone else, uses model == 0xf, as
3975ab5bb018SKeith M Wesolowski 	 * one would expect (max value means possible overflow).  Sigh.
3976ab5bb018SKeith M Wesolowski 	 */
3977ab5bb018SKeith M Wesolowski 
3978ab5bb018SKeith M Wesolowski 	switch (cpi->cpi_vendor) {
3979ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Intel:
3980ab5bb018SKeith M Wesolowski 		if (IS_EXTENDED_MODEL_INTEL(cpi))
3981ab5bb018SKeith M Wesolowski 			cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4;
3982ab5bb018SKeith M Wesolowski 		break;
3983ab5bb018SKeith M Wesolowski 	case X86_VENDOR_AMD:
3984ab5bb018SKeith M Wesolowski 		if (CPI_FAMILY(cpi) == 0xf)
3985ab5bb018SKeith M Wesolowski 			cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4;
3986ab5bb018SKeith M Wesolowski 		break;
3987ab5bb018SKeith M Wesolowski 	case X86_VENDOR_HYGON:
3988ab5bb018SKeith M Wesolowski 		cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4;
3989ab5bb018SKeith M Wesolowski 		break;
3990ab5bb018SKeith M Wesolowski 	default:
3991ab5bb018SKeith M Wesolowski 		if (cpi->cpi_model == 0xf)
3992ab5bb018SKeith M Wesolowski 			cpi->cpi_model += CPI_MODEL_XTD(cpi) << 4;
3993ab5bb018SKeith M Wesolowski 		break;
3994ab5bb018SKeith M Wesolowski 	}
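	/*
	 * Editorial example: an AMD Zen-era part reports base family 0xf and
	 * extended family 0x8, so cpi_family becomes 0x17 above; its display
	 * model is likewise (extended model << 4) + base model.
	 */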
3995ab5bb018SKeith M Wesolowski 
3996ab5bb018SKeith M Wesolowski 	cpi->cpi_step = CPI_STEP(cpi);
3997ab5bb018SKeith M Wesolowski 	cpi->cpi_brandid = CPI_BRANDID(cpi);
3998ab5bb018SKeith M Wesolowski 
3999ab5bb018SKeith M Wesolowski 	/*
4000ab5bb018SKeith M Wesolowski 	 * Synthesize chip "revision" and socket type
4001ab5bb018SKeith M Wesolowski 	 */
4002ab5bb018SKeith M Wesolowski 	cpi->cpi_chiprev = _cpuid_chiprev(cpi->cpi_vendor, cpi->cpi_family,
4003ab5bb018SKeith M Wesolowski 	    cpi->cpi_model, cpi->cpi_step);
4004ab5bb018SKeith M Wesolowski 	cpi->cpi_chiprevstr = _cpuid_chiprevstr(cpi->cpi_vendor,
4005ab5bb018SKeith M Wesolowski 	    cpi->cpi_family, cpi->cpi_model, cpi->cpi_step);
4006ab5bb018SKeith M Wesolowski 	cpi->cpi_socket = _cpuid_skt(cpi->cpi_vendor, cpi->cpi_family,
4007ab5bb018SKeith M Wesolowski 	    cpi->cpi_model, cpi->cpi_step);
400822e4c3acSKeith M Wesolowski 	cpi->cpi_uarchrev = _cpuid_uarchrev(cpi->cpi_vendor, cpi->cpi_family,
400922e4c3acSKeith M Wesolowski 	    cpi->cpi_model, cpi->cpi_step);
4010ab5bb018SKeith M Wesolowski }
4011ab5bb018SKeith M Wesolowski 
4012ab5bb018SKeith M Wesolowski static void
4013ab5bb018SKeith M Wesolowski cpuid_pass_basic(cpu_t *cpu, void *arg)
4014ab5bb018SKeith M Wesolowski {
4015ab5bb018SKeith M Wesolowski 	uchar_t *featureset = (uchar_t *)arg;
4016ab5bb018SKeith M Wesolowski 	uint32_t mask_ecx, mask_edx;
4017ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi;
4018ab5bb018SKeith M Wesolowski 	struct cpuid_regs *cp;
4019ab5bb018SKeith M Wesolowski 	int xcpuid;
4020ab5bb018SKeith M Wesolowski #if !defined(__xpv)
4021ab5bb018SKeith M Wesolowski 	extern int idle_cpu_prefer_mwait;
4022ab5bb018SKeith M Wesolowski #endif
4023ab5bb018SKeith M Wesolowski 
4024ab5bb018SKeith M Wesolowski 	cpi = cpu->cpu_m.mcpu_cpi;
4025ab5bb018SKeith M Wesolowski 	ASSERT(cpi != NULL);
4026ab5bb018SKeith M Wesolowski 
4027ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax < 1)
4028ab5bb018SKeith M Wesolowski 		return;
4029ab5bb018SKeith M Wesolowski 
4030ab5bb018SKeith M Wesolowski 	/*
4031ab5bb018SKeith M Wesolowski 	 * This was filled during the identification pass.
4032ab5bb018SKeith M Wesolowski 	 */
4033ab5bb018SKeith M Wesolowski 	cp = &cpi->cpi_std[1];
4034ab5bb018SKeith M Wesolowski 
4035ab5bb018SKeith M Wesolowski 	/*
4036ab5bb018SKeith M Wesolowski 	 * *default* assumptions:
4037ab5bb018SKeith M Wesolowski 	 * - believe %edx feature word
4038ab5bb018SKeith M Wesolowski 	 * - ignore %ecx feature word
4039ab5bb018SKeith M Wesolowski 	 * - 32-bit virtual and physical addressing
4040ab5bb018SKeith M Wesolowski 	 */
4041ab5bb018SKeith M Wesolowski 	mask_edx = 0xffffffff;
4042ab5bb018SKeith M Wesolowski 	mask_ecx = 0;
4043ab5bb018SKeith M Wesolowski 
4044ab5bb018SKeith M Wesolowski 	cpi->cpi_pabits = cpi->cpi_vabits = 32;
4045ab5bb018SKeith M Wesolowski 
4046ab5bb018SKeith M Wesolowski 	switch (cpi->cpi_vendor) {
4047ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Intel:
4048ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 5)
4049ab5bb018SKeith M Wesolowski 			x86_type = X86_TYPE_P5;
4050ab5bb018SKeith M Wesolowski 		else if (IS_LEGACY_P6(cpi)) {
4051ab5bb018SKeith M Wesolowski 			x86_type = X86_TYPE_P6;
4052ab5bb018SKeith M Wesolowski 			pentiumpro_bug4046376 = 1;
4053ab5bb018SKeith M Wesolowski 			/*
4054ab5bb018SKeith M Wesolowski 			 * Clear the SEP bit when it was set erroneously
4055ab5bb018SKeith M Wesolowski 			 */
4056ab5bb018SKeith M Wesolowski 			if (cpi->cpi_model < 3 && cpi->cpi_step < 3)
4057ab5bb018SKeith M Wesolowski 				cp->cp_edx &= ~CPUID_INTC_EDX_SEP;
4058ab5bb018SKeith M Wesolowski 		} else if (IS_NEW_F6(cpi) || cpi->cpi_family == 0xf) {
4059ab5bb018SKeith M Wesolowski 			x86_type = X86_TYPE_P4;
4060ab5bb018SKeith M Wesolowski 			/*
4061ab5bb018SKeith M Wesolowski 			 * We don't currently depend on any of the %ecx
4062ab5bb018SKeith M Wesolowski 			 * features until Prescott, so we'll only check
4063ab5bb018SKeith M Wesolowski 			 * this from P4 onwards.  We might want to revisit
4064ab5bb018SKeith M Wesolowski 			 * that idea later.
4065ab5bb018SKeith M Wesolowski 			 */
4066ab5bb018SKeith M Wesolowski 			mask_ecx = 0xffffffff;
4067ab5bb018SKeith M Wesolowski 		} else if (cpi->cpi_family > 0xf)
4068ab5bb018SKeith M Wesolowski 			mask_ecx = 0xffffffff;
4069ab5bb018SKeith M Wesolowski 		/*
4070ab5bb018SKeith M Wesolowski 		 * We don't support MONITOR/MWAIT if leaf 5 is not available
4071ab5bb018SKeith M Wesolowski 		 * to obtain the monitor linesize.
4072ab5bb018SKeith M Wesolowski 		 */
4073ab5bb018SKeith M Wesolowski 		if (cpi->cpi_maxeax < 5)
4074ab5bb018SKeith M Wesolowski 			mask_ecx &= ~CPUID_INTC_ECX_MON;
4075ab5bb018SKeith M Wesolowski 		break;
4076ab5bb018SKeith M Wesolowski 	case X86_VENDOR_IntelClone:
4077ab5bb018SKeith M Wesolowski 	default:
4078ab5bb018SKeith M Wesolowski 		break;
4079ab5bb018SKeith M Wesolowski 	case X86_VENDOR_AMD:
4080ab5bb018SKeith M Wesolowski #if defined(OPTERON_ERRATUM_108)
4081ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 0xf && cpi->cpi_model == 0xe) {
4082ab5bb018SKeith M Wesolowski 			cp->cp_eax = (0xf0f & cp->cp_eax) | 0xc0;
4083ab5bb018SKeith M Wesolowski 			cpi->cpi_model = 0xc;
4084ab5bb018SKeith M Wesolowski 		} else
4085ab5bb018SKeith M Wesolowski #endif
4086ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 5) {
4087ab5bb018SKeith M Wesolowski 			/*
4088ab5bb018SKeith M Wesolowski 			 * AMD K5 and K6
4089ab5bb018SKeith M Wesolowski 			 *
4090ab5bb018SKeith M Wesolowski 			 * These CPUs have an incomplete implementation
4091ab5bb018SKeith M Wesolowski 			 * of MCA/MCE which we mask away.
4092ab5bb018SKeith M Wesolowski 			 */
4093ab5bb018SKeith M Wesolowski 			mask_edx &= ~(CPUID_INTC_EDX_MCE | CPUID_INTC_EDX_MCA);
4094ab5bb018SKeith M Wesolowski 
4095ab5bb018SKeith M Wesolowski 			/*
4096ab5bb018SKeith M Wesolowski 			 * Model 0 uses the wrong bit (bit 9, the APIC bit)
4097ab5bb018SKeith M Wesolowski 			 * to indicate PGE (bit 13).  Fix it here.
4098ab5bb018SKeith M Wesolowski 			 */
4099ab5bb018SKeith M Wesolowski 			if (cpi->cpi_model == 0) {
4100ab5bb018SKeith M Wesolowski 				if (cp->cp_edx & 0x200) {
4101ab5bb018SKeith M Wesolowski 					cp->cp_edx &= ~0x200;
4102ab5bb018SKeith M Wesolowski 					cp->cp_edx |= CPUID_INTC_EDX_PGE;
4103ab5bb018SKeith M Wesolowski 				}
4104ab5bb018SKeith M Wesolowski 			}
4105ab5bb018SKeith M Wesolowski 
4106ab5bb018SKeith M Wesolowski 			/*
4107ab5bb018SKeith M Wesolowski 			 * Early models had problems w/ MMX; disable.
4108ab5bb018SKeith M Wesolowski 			 */
4109ab5bb018SKeith M Wesolowski 			if (cpi->cpi_model < 6)
4110ab5bb018SKeith M Wesolowski 				mask_edx &= ~CPUID_INTC_EDX_MMX;
4111ab5bb018SKeith M Wesolowski 		}
4112ab5bb018SKeith M Wesolowski 
4113ab5bb018SKeith M Wesolowski 		/*
4114ab5bb018SKeith M Wesolowski 		 * For newer families, SSE3 and CX16, at least, are valid;
4115ab5bb018SKeith M Wesolowski 		 * enable all
4116ab5bb018SKeith M Wesolowski 		 */
4117ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family >= 0xf)
4118ab5bb018SKeith M Wesolowski 			mask_ecx = 0xffffffff;
4119ab5bb018SKeith M Wesolowski 		/*
4120ab5bb018SKeith M Wesolowski 		 * We don't support MONITOR/MWAIT if leaf 5 is not available
4121ab5bb018SKeith M Wesolowski 		 * to obtain the monitor linesize.
4122ab5bb018SKeith M Wesolowski 		 */
4123ab5bb018SKeith M Wesolowski 		if (cpi->cpi_maxeax < 5)
4124ab5bb018SKeith M Wesolowski 			mask_ecx &= ~CPUID_INTC_ECX_MON;
4125ab5bb018SKeith M Wesolowski 
4126ab5bb018SKeith M Wesolowski #if !defined(__xpv)
4127ab5bb018SKeith M Wesolowski 		/*
4128ab5bb018SKeith M Wesolowski 		 * AMD has not historically used MWAIT in the CPU's idle loop.
4129ab5bb018SKeith M Wesolowski 		 * Pre-family-10h Opterons do not have the MWAIT instruction. We
4130ab5bb018SKeith M Wesolowski 		 * know for certain that in at least family 17h, per AMD, mwait
4131ab5bb018SKeith M Wesolowski 		 * is preferred. For the families in between, it is less certain.
4132ab5bb018SKeith M Wesolowski 		 */
4133ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family < 0x17) {
4134ab5bb018SKeith M Wesolowski 			idle_cpu_prefer_mwait = 0;
4135ab5bb018SKeith M Wesolowski 		}
4136ab5bb018SKeith M Wesolowski #endif
4137ab5bb018SKeith M Wesolowski 
4138ab5bb018SKeith M Wesolowski 		break;
4139ab5bb018SKeith M Wesolowski 	case X86_VENDOR_HYGON:
4140ab5bb018SKeith M Wesolowski 		/* Enable all for Hygon Dhyana CPU */
4141ab5bb018SKeith M Wesolowski 		mask_ecx = 0xffffffff;
4142ab5bb018SKeith M Wesolowski 		break;
4143ab5bb018SKeith M Wesolowski 	case X86_VENDOR_TM:
4144ab5bb018SKeith M Wesolowski 		/*
4145ab5bb018SKeith M Wesolowski 		 * Work around the NT workaround in CMS 4.1
4146ab5bb018SKeith M Wesolowski 		 */
4147ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 5 && cpi->cpi_model == 4 &&
4148ab5bb018SKeith M Wesolowski 		    (cpi->cpi_step == 2 || cpi->cpi_step == 3))
4149ab5bb018SKeith M Wesolowski 			cp->cp_edx |= CPUID_INTC_EDX_CX8;
4150ab5bb018SKeith M Wesolowski 		break;
4151ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Centaur:
4152ab5bb018SKeith M Wesolowski 		/*
4153ab5bb018SKeith M Wesolowski 		 * Work around the NT workarounds again
4154ab5bb018SKeith M Wesolowski 		 */
4155ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 6)
4156ab5bb018SKeith M Wesolowski 			cp->cp_edx |= CPUID_INTC_EDX_CX8;
4157ab5bb018SKeith M Wesolowski 		break;
4158ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Cyrix:
4159ab5bb018SKeith M Wesolowski 		/*
4160ab5bb018SKeith M Wesolowski 		 * We rely heavily on the probing in locore
4161ab5bb018SKeith M Wesolowski 		 * to actually figure out what parts, if any,
4162ab5bb018SKeith M Wesolowski 		 * of the Cyrix cpuid instruction to believe.
4163ab5bb018SKeith M Wesolowski 		 */
4164ab5bb018SKeith M Wesolowski 		switch (x86_type) {
4165ab5bb018SKeith M Wesolowski 		case X86_TYPE_CYRIX_486:
4166ab5bb018SKeith M Wesolowski 			mask_edx = 0;
4167ab5bb018SKeith M Wesolowski 			break;
4168ab5bb018SKeith M Wesolowski 		case X86_TYPE_CYRIX_6x86:
4169ab5bb018SKeith M Wesolowski 			mask_edx = 0;
4170ab5bb018SKeith M Wesolowski 			break;
4171ab5bb018SKeith M Wesolowski 		case X86_TYPE_CYRIX_6x86L:
4172ab5bb018SKeith M Wesolowski 			mask_edx =
4173ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_DE |
4174ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_CX8;
4175ab5bb018SKeith M Wesolowski 			break;
4176ab5bb018SKeith M Wesolowski 		case X86_TYPE_CYRIX_6x86MX:
4177ab5bb018SKeith M Wesolowski 			mask_edx =
4178ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_DE |
4179ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_MSR |
4180ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_CX8 |
4181ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_PGE |
4182ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_CMOV |
4183ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_MMX;
4184ab5bb018SKeith M Wesolowski 			break;
4185ab5bb018SKeith M Wesolowski 		case X86_TYPE_CYRIX_GXm:
4186ab5bb018SKeith M Wesolowski 			mask_edx =
4187ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_MSR |
4188ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_CX8 |
4189ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_CMOV |
4190ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_MMX;
4191ab5bb018SKeith M Wesolowski 			break;
4192ab5bb018SKeith M Wesolowski 		case X86_TYPE_CYRIX_MediaGX:
4193ab5bb018SKeith M Wesolowski 			break;
4194ab5bb018SKeith M Wesolowski 		case X86_TYPE_CYRIX_MII:
4195ab5bb018SKeith M Wesolowski 		case X86_TYPE_VIA_CYRIX_III:
4196ab5bb018SKeith M Wesolowski 			mask_edx =
4197ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_DE |
4198ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_TSC |
4199ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_MSR |
4200ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_CX8 |
4201ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_PGE |
4202ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_CMOV |
4203ab5bb018SKeith M Wesolowski 			    CPUID_INTC_EDX_MMX;
4204ab5bb018SKeith M Wesolowski 			break;
4205ab5bb018SKeith M Wesolowski 		default:
4206ab5bb018SKeith M Wesolowski 			break;
4207ab5bb018SKeith M Wesolowski 		}
4208ab5bb018SKeith M Wesolowski 		break;
4209ab5bb018SKeith M Wesolowski 	}
4210ab5bb018SKeith M Wesolowski 
4211ab5bb018SKeith M Wesolowski #if defined(__xpv)
4212ab5bb018SKeith M Wesolowski 	/*
4213ab5bb018SKeith M Wesolowski 	 * Do not support MONITOR/MWAIT under a hypervisor
4214ab5bb018SKeith M Wesolowski 	 */
4215ab5bb018SKeith M Wesolowski 	mask_ecx &= ~CPUID_INTC_ECX_MON;
4216ab5bb018SKeith M Wesolowski 	/*
4217ab5bb018SKeith M Wesolowski 	 * Do not support XSAVE under a hypervisor for now
4218ab5bb018SKeith M Wesolowski 	 */
4219ab5bb018SKeith M Wesolowski 	xsave_force_disable = B_TRUE;
4220ab5bb018SKeith M Wesolowski 
4221ab5bb018SKeith M Wesolowski #endif	/* __xpv */
4222ab5bb018SKeith M Wesolowski 
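	/*
	 * AVX, F16C, and FMA all operate on YMM state that can only be
	 * saved and restored through the xsave family of instructions,
	 * so they are hidden along with XSAVE itself.
	 */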
4223ab5bb018SKeith M Wesolowski 	if (xsave_force_disable) {
4224ab5bb018SKeith M Wesolowski 		mask_ecx &= ~CPUID_INTC_ECX_XSAVE;
4225ab5bb018SKeith M Wesolowski 		mask_ecx &= ~CPUID_INTC_ECX_AVX;
4226ab5bb018SKeith M Wesolowski 		mask_ecx &= ~CPUID_INTC_ECX_F16C;
4227ab5bb018SKeith M Wesolowski 		mask_ecx &= ~CPUID_INTC_ECX_FMA;
4228ab5bb018SKeith M Wesolowski 	}
4229ab5bb018SKeith M Wesolowski 
4230ab5bb018SKeith M Wesolowski 	/*
4231ab5bb018SKeith M Wesolowski 	 * Now that we've figured out the masks that determine
4232ab5bb018SKeith M Wesolowski 	 * which bits we choose to believe, apply the masks
4233ab5bb018SKeith M Wesolowski 	 * to the feature words, then map the kernel's view
4234ab5bb018SKeith M Wesolowski 	 * of these feature words into its feature word.
4235ab5bb018SKeith M Wesolowski 	 */
4236ab5bb018SKeith M Wesolowski 	cp->cp_edx &= mask_edx;
4237ab5bb018SKeith M Wesolowski 	cp->cp_ecx &= mask_ecx;
4238ab5bb018SKeith M Wesolowski 
4239ab5bb018SKeith M Wesolowski 	/*
4240ab5bb018SKeith M Wesolowski 	 * apply any platform restrictions (we don't call this
4241ab5bb018SKeith M Wesolowski 	 * immediately after __cpuid_insn here, because we need the
4242ab5bb018SKeith M Wesolowski 	 * workarounds applied above first)
4243ab5bb018SKeith M Wesolowski 	 */
4244ab5bb018SKeith M Wesolowski 	platform_cpuid_mangle(cpi->cpi_vendor, 1, cp);
4245ab5bb018SKeith M Wesolowski 
4246ab5bb018SKeith M Wesolowski 	/*
4247ab5bb018SKeith M Wesolowski 	 * In addition to ecx and edx, Intel and AMD are storing a bunch of
424856726c7eSRobert Mustacchi 	 * instruction set extensions in leaf 7's ebx, ecx, and edx. Note, leaf
424956726c7eSRobert Mustacchi 	 * 7 has sub-leaves determined by ecx.
4250ab5bb018SKeith M Wesolowski 	 */
4251ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax >= 7) {
4252ab5bb018SKeith M Wesolowski 		struct cpuid_regs *ecp;
4253ab5bb018SKeith M Wesolowski 		ecp = &cpi->cpi_std[7];
4254ab5bb018SKeith M Wesolowski 		ecp->cp_eax = 7;
4255ab5bb018SKeith M Wesolowski 		ecp->cp_ecx = 0;
4256ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(ecp);
4257ab5bb018SKeith M Wesolowski 
4258ab5bb018SKeith M Wesolowski 		/*
4259ab5bb018SKeith M Wesolowski 		 * If XSAVE has been disabled, just ignore all of the
426056726c7eSRobert Mustacchi 		 * extended-save-area dependent flags here. By removing most of
426156726c7eSRobert Mustacchi 		 * the leaf 7, sub-leaf 0 flags, we ensure that we don't
426256726c7eSRobert Mustacchi 		 * end up looking at additional xsave-dependent leaves right
426356726c7eSRobert Mustacchi 		 * now.
4264ab5bb018SKeith M Wesolowski 		 */
4265ab5bb018SKeith M Wesolowski 		if (xsave_force_disable) {
4266ab5bb018SKeith M Wesolowski 			ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_BMI1;
4267ab5bb018SKeith M Wesolowski 			ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_BMI2;
4268ab5bb018SKeith M Wesolowski 			ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_AVX2;
4269ab5bb018SKeith M Wesolowski 			ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_MPX;
4270ab5bb018SKeith M Wesolowski 			ecp->cp_ebx &= ~CPUID_INTC_EBX_7_0_ALL_AVX512;
4271ab5bb018SKeith M Wesolowski 			ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_ALL_AVX512;
4272ab5bb018SKeith M Wesolowski 			ecp->cp_edx &= ~CPUID_INTC_EDX_7_0_ALL_AVX512;
4273ab5bb018SKeith M Wesolowski 			ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_VAES;
4274ab5bb018SKeith M Wesolowski 			ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_VPCLMULQDQ;
427556726c7eSRobert Mustacchi 			ecp->cp_ecx &= ~CPUID_INTC_ECX_7_0_GFNI;
4276ab5bb018SKeith M Wesolowski 		}
4277ab5bb018SKeith M Wesolowski 
4278ab5bb018SKeith M Wesolowski 		if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_SMEP)
4279ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SMEP);
4280ab5bb018SKeith M Wesolowski 
4281ab5bb018SKeith M Wesolowski 		/*
4282ab5bb018SKeith M Wesolowski 		 * We check disable_smap here in addition to in startup_smap()
4283ab5bb018SKeith M Wesolowski 		 * to ensure CPUs that aren't the boot CPU don't accidentally
4284ab5bb018SKeith M Wesolowski 		 * include it in the feature set and thus generate a mismatched
4285ab5bb018SKeith M Wesolowski 		 * x86 feature set across CPUs.
4286ab5bb018SKeith M Wesolowski 		 */
4287ab5bb018SKeith M Wesolowski 		if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_SMAP &&
4288ab5bb018SKeith M Wesolowski 		    disable_smap == 0)
4289ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SMAP);
4290ab5bb018SKeith M Wesolowski 
4291ab5bb018SKeith M Wesolowski 		if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_RDSEED)
4292ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_RDSEED);
4293ab5bb018SKeith M Wesolowski 
4294ab5bb018SKeith M Wesolowski 		if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_ADX)
4295ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_ADX);
4296ab5bb018SKeith M Wesolowski 
4297ab5bb018SKeith M Wesolowski 		if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_FSGSBASE)
4298ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_FSGSBASE);
4299ab5bb018SKeith M Wesolowski 
4300ab5bb018SKeith M Wesolowski 		if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_CLFLUSHOPT)
4301ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_CLFLUSHOPT);
4302ab5bb018SKeith M Wesolowski 
4303ab5bb018SKeith M Wesolowski 		if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_INVPCID)
4304ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_INVPCID);
4305ab5bb018SKeith M Wesolowski 
4306ab5bb018SKeith M Wesolowski 		if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_UMIP)
4307ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_UMIP);
4308ab5bb018SKeith M Wesolowski 		if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_PKU)
4309ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_PKU);
4310ab5bb018SKeith M Wesolowski 		if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_OSPKE)
4311ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_OSPKE);
431256726c7eSRobert Mustacchi 		if (ecp->cp_ecx & CPUID_INTC_ECX_7_0_GFNI)
431356726c7eSRobert Mustacchi 			add_x86_feature(featureset, X86FSET_GFNI);
431456726c7eSRobert Mustacchi 
431556726c7eSRobert Mustacchi 		if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_CLWB)
431656726c7eSRobert Mustacchi 			add_x86_feature(featureset, X86FSET_CLWB);
4317ab5bb018SKeith M Wesolowski 
4318ab5bb018SKeith M Wesolowski 		if (cpi->cpi_vendor == X86_VENDOR_Intel) {
4319ab5bb018SKeith M Wesolowski 			if (ecp->cp_ebx & CPUID_INTC_EBX_7_0_MPX)
4320ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_MPX);
432156726c7eSRobert Mustacchi 		}
4322ab5bb018SKeith M Wesolowski 
432356726c7eSRobert Mustacchi 		/*
43245a9c36deSDan McDonald 		 * If we have subleaf 1 or 2 available, grab and store
43255a9c36deSDan McDonald 		 * them. Subleaf 1 covers more AVX and related features.
432656726c7eSRobert Mustacchi 		 */
432756726c7eSRobert Mustacchi 		if (ecp->cp_eax >= 1) {
432856726c7eSRobert Mustacchi 			struct cpuid_regs *c71;
432956726c7eSRobert Mustacchi 			c71 = &cpi->cpi_sub7[0];
433056726c7eSRobert Mustacchi 			c71->cp_eax = 7;
433156726c7eSRobert Mustacchi 			c71->cp_ecx = 1;
433256726c7eSRobert Mustacchi 			(void) __cpuid_insn(c71);
4333ab5bb018SKeith M Wesolowski 		}
43345a9c36deSDan McDonald 
43355a9c36deSDan McDonald 		/* Subleaf 2 has certain security indicators in it. */
43365a9c36deSDan McDonald 		if (ecp->cp_eax >= 2) {
43375a9c36deSDan McDonald 			struct cpuid_regs *c72;
43385a9c36deSDan McDonald 			c72 = &cpi->cpi_sub7[1];
43395a9c36deSDan McDonald 			c72->cp_eax = 7;
43405a9c36deSDan McDonald 			c72->cp_ecx = 2;
43415a9c36deSDan McDonald 			(void) __cpuid_insn(c72);
43425a9c36deSDan McDonald 		}
4343ab5bb018SKeith M Wesolowski 	}
4344ab5bb018SKeith M Wesolowski 
4345ab5bb018SKeith M Wesolowski 	/*
4346ab5bb018SKeith M Wesolowski 	 * fold in overrides from the "eeprom" mechanism
4347ab5bb018SKeith M Wesolowski 	 */
4348ab5bb018SKeith M Wesolowski 	cp->cp_edx |= cpuid_feature_edx_include;
4349ab5bb018SKeith M Wesolowski 	cp->cp_edx &= ~cpuid_feature_edx_exclude;
4350ab5bb018SKeith M Wesolowski 
4351ab5bb018SKeith M Wesolowski 	cp->cp_ecx |= cpuid_feature_ecx_include;
4352ab5bb018SKeith M Wesolowski 	cp->cp_ecx &= ~cpuid_feature_ecx_exclude;
4353ab5bb018SKeith M Wesolowski 
4354ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_PSE) {
4355ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_LARGEPAGE);
4356ab5bb018SKeith M Wesolowski 	}
4357ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_TSC) {
4358ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_TSC);
4359ab5bb018SKeith M Wesolowski 	}
4360ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_MSR) {
4361ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_MSR);
4362ab5bb018SKeith M Wesolowski 	}
4363ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_MTRR) {
4364ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_MTRR);
4365ab5bb018SKeith M Wesolowski 	}
4366ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_PGE) {
4367ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_PGE);
4368ab5bb018SKeith M Wesolowski 	}
4369ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_CMOV) {
4370ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_CMOV);
4371ab5bb018SKeith M Wesolowski 	}
4372ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_MMX) {
4373ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_MMX);
4374ab5bb018SKeith M Wesolowski 	}
4375ab5bb018SKeith M Wesolowski 	if ((cp->cp_edx & CPUID_INTC_EDX_MCE) != 0 &&
4376ab5bb018SKeith M Wesolowski 	    (cp->cp_edx & CPUID_INTC_EDX_MCA) != 0) {
4377ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_MCA);
4378ab5bb018SKeith M Wesolowski 	}
4379ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_PAE) {
4380ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_PAE);
4381ab5bb018SKeith M Wesolowski 	}
4382ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_CX8) {
4383ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_CX8);
4384ab5bb018SKeith M Wesolowski 	}
4385ab5bb018SKeith M Wesolowski 	if (cp->cp_ecx & CPUID_INTC_ECX_CX16) {
4386ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_CX16);
4387ab5bb018SKeith M Wesolowski 	}
4388ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_PAT) {
4389ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_PAT);
4390ab5bb018SKeith M Wesolowski 	}
4391ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_SEP) {
4392ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_SEP);
4393ab5bb018SKeith M Wesolowski 	}
4394ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_FXSR) {
4395ab5bb018SKeith M Wesolowski 		/*
4396ab5bb018SKeith M Wesolowski 		 * In our implementation, fxsave/fxrstor
4397ab5bb018SKeith M Wesolowski 		 * are prerequisites before we'll even
4398ab5bb018SKeith M Wesolowski 		 * try and do SSE things.
4399ab5bb018SKeith M Wesolowski 		 */
4400ab5bb018SKeith M Wesolowski 		if (cp->cp_edx & CPUID_INTC_EDX_SSE) {
4401ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SSE);
4402ab5bb018SKeith M Wesolowski 		}
4403ab5bb018SKeith M Wesolowski 		if (cp->cp_edx & CPUID_INTC_EDX_SSE2) {
4404ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SSE2);
4405ab5bb018SKeith M Wesolowski 		}
4406ab5bb018SKeith M Wesolowski 		if (cp->cp_ecx & CPUID_INTC_ECX_SSE3) {
4407ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SSE3);
4408ab5bb018SKeith M Wesolowski 		}
4409ab5bb018SKeith M Wesolowski 		if (cp->cp_ecx & CPUID_INTC_ECX_SSSE3) {
4410ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SSSE3);
4411ab5bb018SKeith M Wesolowski 		}
4412ab5bb018SKeith M Wesolowski 		if (cp->cp_ecx & CPUID_INTC_ECX_SSE4_1) {
4413ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SSE4_1);
4414ab5bb018SKeith M Wesolowski 		}
4415ab5bb018SKeith M Wesolowski 		if (cp->cp_ecx & CPUID_INTC_ECX_SSE4_2) {
4416ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SSE4_2);
4417ab5bb018SKeith M Wesolowski 		}
4418ab5bb018SKeith M Wesolowski 		if (cp->cp_ecx & CPUID_INTC_ECX_AES) {
4419ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_AES);
4420ab5bb018SKeith M Wesolowski 		}
4421ab5bb018SKeith M Wesolowski 		if (cp->cp_ecx & CPUID_INTC_ECX_PCLMULQDQ) {
4422ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_PCLMULQDQ);
4423ab5bb018SKeith M Wesolowski 		}
4424ab5bb018SKeith M Wesolowski 
4425ab5bb018SKeith M Wesolowski 		if (cpi->cpi_std[7].cp_ebx & CPUID_INTC_EBX_7_0_SHA)
4426ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_SHA);
4427ab5bb018SKeith M Wesolowski 
4428ab5bb018SKeith M Wesolowski 		if (cp->cp_ecx & CPUID_INTC_ECX_XSAVE) {
4429ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_XSAVE);
4430ab5bb018SKeith M Wesolowski 
4431ab5bb018SKeith M Wesolowski 			/* We only test AVX & AVX512 when there is XSAVE */
443256726c7eSRobert Mustacchi 			cpuid_basic_avx(cpu, featureset);
4433ab5bb018SKeith M Wesolowski 		}
4434ab5bb018SKeith M Wesolowski 	}
4435ab5bb018SKeith M Wesolowski 
4436ab5bb018SKeith M Wesolowski 	if (cp->cp_ecx & CPUID_INTC_ECX_PCID) {
4437ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_PCID);
4438ab5bb018SKeith M Wesolowski 	}
4439ab5bb018SKeith M Wesolowski 
4440ab5bb018SKeith M Wesolowski 	if (cp->cp_ecx & CPUID_INTC_ECX_X2APIC) {
4441ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_X2APIC);
4442ab5bb018SKeith M Wesolowski 	}
4443ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_DE) {
4444ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_DE);
4445ab5bb018SKeith M Wesolowski 	}
4446ab5bb018SKeith M Wesolowski #if !defined(__xpv)
4447ab5bb018SKeith M Wesolowski 	if (cp->cp_ecx & CPUID_INTC_ECX_MON) {
4449ab5bb018SKeith M Wesolowski 		/*
4450ab5bb018SKeith M Wesolowski 		 * We require the CLFLUSH instruction for the erratum
4451ab5bb018SKeith M Wesolowski 		 * workaround needed to use MONITOR/MWAIT.
4452ab5bb018SKeith M Wesolowski 		 */
4453ab5bb018SKeith M Wesolowski 		if (cp->cp_edx & CPUID_INTC_EDX_CLFSH) {
4454ab5bb018SKeith M Wesolowski 			cpi->cpi_mwait.support |= MWAIT_SUPPORT;
4455ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_MWAIT);
4456ab5bb018SKeith M Wesolowski 		} else {
4457ab5bb018SKeith M Wesolowski 			extern int idle_cpu_assert_cflush_monitor;
4458ab5bb018SKeith M Wesolowski 
4459ab5bb018SKeith M Wesolowski 			/*
4460ab5bb018SKeith M Wesolowski 			 * All processors we are aware of which have
4461ab5bb018SKeith M Wesolowski 			 * MONITOR/MWAIT also have CLFLUSH.
4462ab5bb018SKeith M Wesolowski 			 */
4463ab5bb018SKeith M Wesolowski 			if (idle_cpu_assert_cflush_monitor) {
4464ab5bb018SKeith M Wesolowski 				ASSERT((cp->cp_ecx & CPUID_INTC_ECX_MON) &&
4465ab5bb018SKeith M Wesolowski 				    (cp->cp_edx & CPUID_INTC_EDX_CLFSH));
4466ab5bb018SKeith M Wesolowski 			}
4467ab5bb018SKeith M Wesolowski 		}
4468ab5bb018SKeith M Wesolowski 	}
4469ab5bb018SKeith M Wesolowski #endif	/* __xpv */
4470ab5bb018SKeith M Wesolowski 
4471ab5bb018SKeith M Wesolowski 	if (cp->cp_ecx & CPUID_INTC_ECX_VMX) {
4472ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_VMX);
4473ab5bb018SKeith M Wesolowski 	}
4474ab5bb018SKeith M Wesolowski 
4475ab5bb018SKeith M Wesolowski 	if (cp->cp_ecx & CPUID_INTC_ECX_RDRAND)
4476ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_RDRAND);
4477ab5bb018SKeith M Wesolowski 
4478ab5bb018SKeith M Wesolowski 	/*
4479ab5bb018SKeith M Wesolowski 	 * We only need this the first time; the rest of the CPUs
4480ab5bb018SKeith M Wesolowski 	 * follow suit, so we capture it only for the boot CPU.
4481ab5bb018SKeith M Wesolowski 	 */
4482ab5bb018SKeith M Wesolowski 	if (cp->cp_edx & CPUID_INTC_EDX_CLFSH) {
4483ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_CLFSH);
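		/*
		 * CPUID.1 %ebx bits 15:8 report the CLFLUSH line size in
		 * units of 8 bytes; e.g. the common value of 8 encodes a
		 * 64-byte line.
		 */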
4484ab5bb018SKeith M Wesolowski 		x86_clflush_size = (BITX(cp->cp_ebx, 15, 8) * 8);
4485ab5bb018SKeith M Wesolowski 	}
4486ab5bb018SKeith M Wesolowski 	if (is_x86_feature(featureset, X86FSET_PAE))
4487ab5bb018SKeith M Wesolowski 		cpi->cpi_pabits = 36;
4488ab5bb018SKeith M Wesolowski 
4489ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax >= 0xD && !xsave_force_disable) {
4490ab5bb018SKeith M Wesolowski 		struct cpuid_regs r, *ecp;
4491ab5bb018SKeith M Wesolowski 
4492ab5bb018SKeith M Wesolowski 		ecp = &r;
4493ab5bb018SKeith M Wesolowski 		ecp->cp_eax = 0xD;
4494ab5bb018SKeith M Wesolowski 		ecp->cp_ecx = 1;
4495ab5bb018SKeith M Wesolowski 		ecp->cp_edx = ecp->cp_ebx = 0;
4496ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(ecp);
4497ab5bb018SKeith M Wesolowski 
4498ab5bb018SKeith M Wesolowski 		if (ecp->cp_eax & CPUID_INTC_EAX_D_1_XSAVEOPT)
4499ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_XSAVEOPT);
4500ab5bb018SKeith M Wesolowski 		if (ecp->cp_eax & CPUID_INTC_EAX_D_1_XSAVEC)
4501ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_XSAVEC);
4502ab5bb018SKeith M Wesolowski 		if (ecp->cp_eax & CPUID_INTC_EAX_D_1_XSAVES)
4503ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_XSAVES);
4504a65c38a3SRobert Mustacchi 
4505a65c38a3SRobert Mustacchi 		/*
4506a65c38a3SRobert Mustacchi 		 * Zen 2 family processors suffer from erratum 1386 that causes
4507a65c38a3SRobert Mustacchi 		 * xsaves to not function correctly in some circumstances. There
4508a65c38a3SRobert Mustacchi 		 * are no supervisor states in Zen 2 and earlier. Practically
4509a65c38a3SRobert Mustacchi 		 * speaking this has no impact for us as we currently do not
4510a65c38a3SRobert Mustacchi 		 * leverage compressed xsave formats. To safeguard against
4511a65c38a3SRobert Mustacchi 		 * issues in the future where we may opt to using it, we remove
4512a65c38a3SRobert Mustacchi 		 * it from the feature set now. While Matisse has a microcode
4513a65c38a3SRobert Mustacchi 		 * update available with a fix, not all Zen 2 CPUs do, so it's
4514a65c38a3SRobert Mustacchi 		 * simpler for the moment to unconditionally remove it.
4515a65c38a3SRobert Mustacchi 		 */
4516a65c38a3SRobert Mustacchi 		if (cpi->cpi_vendor == X86_VENDOR_AMD &&
4517a65c38a3SRobert Mustacchi 		    uarchrev_uarch(cpi->cpi_uarchrev) <= X86_UARCH_AMD_ZEN2) {
4518a65c38a3SRobert Mustacchi 			remove_x86_feature(featureset, X86FSET_XSAVES);
4519a65c38a3SRobert Mustacchi 		}
4520ab5bb018SKeith M Wesolowski 	}
4521ab5bb018SKeith M Wesolowski 
4522ab5bb018SKeith M Wesolowski 	/*
4523ab5bb018SKeith M Wesolowski 	 * Work on the "extended" feature information, doing
4524ab5bb018SKeith M Wesolowski 	 * some basic initialization to be used in the extended pass.
4525ab5bb018SKeith M Wesolowski 	 */
4526ab5bb018SKeith M Wesolowski 	xcpuid = 0;
4527ab5bb018SKeith M Wesolowski 	switch (cpi->cpi_vendor) {
4528ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Intel:
4529ab5bb018SKeith M Wesolowski 		/*
4530ab5bb018SKeith M Wesolowski 		 * On KVM we know we will have proper support for extended
4531ab5bb018SKeith M Wesolowski 		 * cpuid.
4532ab5bb018SKeith M Wesolowski 		 */
4533ab5bb018SKeith M Wesolowski 		if (IS_NEW_F6(cpi) || cpi->cpi_family >= 0xf ||
4534ab5bb018SKeith M Wesolowski 		    (get_hwenv() == HW_KVM && cpi->cpi_family == 6 &&
4535ab5bb018SKeith M Wesolowski 		    (cpi->cpi_model == 6 || cpi->cpi_model == 2)))
4536ab5bb018SKeith M Wesolowski 			xcpuid++;
4537ab5bb018SKeith M Wesolowski 		break;
4538ab5bb018SKeith M Wesolowski 	case X86_VENDOR_AMD:
4539ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family > 5 ||
4540ab5bb018SKeith M Wesolowski 		    (cpi->cpi_family == 5 && cpi->cpi_model >= 1))
4541ab5bb018SKeith M Wesolowski 			xcpuid++;
4542ab5bb018SKeith M Wesolowski 		break;
4543ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Cyrix:
4544ab5bb018SKeith M Wesolowski 		/*
4545ab5bb018SKeith M Wesolowski 		 * Only these Cyrix CPUs are -known- to support
4546ab5bb018SKeith M Wesolowski 		 * extended cpuid operations.
4547ab5bb018SKeith M Wesolowski 		 */
4548ab5bb018SKeith M Wesolowski 		if (x86_type == X86_TYPE_VIA_CYRIX_III ||
4549ab5bb018SKeith M Wesolowski 		    x86_type == X86_TYPE_CYRIX_GXm)
4550ab5bb018SKeith M Wesolowski 			xcpuid++;
4551ab5bb018SKeith M Wesolowski 		break;
4552ab5bb018SKeith M Wesolowski 	case X86_VENDOR_HYGON:
4553ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Centaur:
4554ab5bb018SKeith M Wesolowski 	case X86_VENDOR_TM:
4555ab5bb018SKeith M Wesolowski 	default:
4556ab5bb018SKeith M Wesolowski 		xcpuid++;
4557ab5bb018SKeith M Wesolowski 		break;
4558ab5bb018SKeith M Wesolowski 	}
4559ab5bb018SKeith M Wesolowski 
4560ab5bb018SKeith M Wesolowski 	if (xcpuid) {
4561ab5bb018SKeith M Wesolowski 		cp = &cpi->cpi_extd[0];
4562ab5bb018SKeith M Wesolowski 		cp->cp_eax = CPUID_LEAF_EXT_0;
4563ab5bb018SKeith M Wesolowski 		cpi->cpi_xmaxeax = __cpuid_insn(cp);
4564ab5bb018SKeith M Wesolowski 	}
4565ab5bb018SKeith M Wesolowski 
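	/*
	 * CPUID_LEAF_EXT_0 is 0x80000000; a CPU lacking extended cpuid
	 * support will not return a maximum extended leaf with that bit
	 * set, so this test also filters out bogus cpi_xmaxeax values.
	 */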
4566ab5bb018SKeith M Wesolowski 	if (cpi->cpi_xmaxeax & CPUID_LEAF_EXT_0) {
4567ab5bb018SKeith M Wesolowski 
4568ab5bb018SKeith M Wesolowski 		if (cpi->cpi_xmaxeax > CPI_XMAXEAX_MAX)
4569ab5bb018SKeith M Wesolowski 			cpi->cpi_xmaxeax = CPI_XMAXEAX_MAX;
4570ab5bb018SKeith M Wesolowski 
4571ab5bb018SKeith M Wesolowski 		switch (cpi->cpi_vendor) {
4572ab5bb018SKeith M Wesolowski 		case X86_VENDOR_Intel:
4573ab5bb018SKeith M Wesolowski 		case X86_VENDOR_AMD:
4574ab5bb018SKeith M Wesolowski 		case X86_VENDOR_HYGON:
4575ab5bb018SKeith M Wesolowski 			if (cpi->cpi_xmaxeax < 0x80000001)
4576ab5bb018SKeith M Wesolowski 				break;
4577ab5bb018SKeith M Wesolowski 			cp = &cpi->cpi_extd[1];
4578ab5bb018SKeith M Wesolowski 			cp->cp_eax = 0x80000001;
4579ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(cp);
4580ab5bb018SKeith M Wesolowski 
4581ab5bb018SKeith M Wesolowski 			if (cpi->cpi_vendor == X86_VENDOR_AMD &&
4582ab5bb018SKeith M Wesolowski 			    cpi->cpi_family == 5 &&
4583ab5bb018SKeith M Wesolowski 			    cpi->cpi_model == 6 &&
4584ab5bb018SKeith M Wesolowski 			    cpi->cpi_step == 6) {
4585ab5bb018SKeith M Wesolowski 				/*
4586ab5bb018SKeith M Wesolowski 				 * K6 model 6 uses bit 10 to indicate SYSC.
4587ab5bb018SKeith M Wesolowski 				 * Later models use bit 11. Fix it here.
4588ab5bb018SKeith M Wesolowski 				 */
4589ab5bb018SKeith M Wesolowski 				if (cp->cp_edx & 0x400) {
4590ab5bb018SKeith M Wesolowski 					cp->cp_edx &= ~0x400;
4591ab5bb018SKeith M Wesolowski 					cp->cp_edx |= CPUID_AMD_EDX_SYSC;
4592ab5bb018SKeith M Wesolowski 				}
4593ab5bb018SKeith M Wesolowski 			}
4594ab5bb018SKeith M Wesolowski 
4595ab5bb018SKeith M Wesolowski 			platform_cpuid_mangle(cpi->cpi_vendor, 0x80000001, cp);
4596ab5bb018SKeith M Wesolowski 
4597ab5bb018SKeith M Wesolowski 			/*
4598ab5bb018SKeith M Wesolowski 			 * Compute the additions to the kernel's feature word.
4599ab5bb018SKeith M Wesolowski 			 */
4600ab5bb018SKeith M Wesolowski 			if (cp->cp_edx & CPUID_AMD_EDX_NX) {
4601ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_NX);
4602ab5bb018SKeith M Wesolowski 			}
4603ab5bb018SKeith M Wesolowski 
4604ab5bb018SKeith M Wesolowski 			/*
4605ab5bb018SKeith M Wesolowski 			 * Regardless of whether or not we boot 64-bit,
4606ab5bb018SKeith M Wesolowski 			 * we should have a way to identify whether
4607ab5bb018SKeith M Wesolowski 			 * the CPU is capable of running 64-bit.
4608ab5bb018SKeith M Wesolowski 			 */
4609ab5bb018SKeith M Wesolowski 			if (cp->cp_edx & CPUID_AMD_EDX_LM) {
4610ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_64);
4611ab5bb018SKeith M Wesolowski 			}
4612ab5bb018SKeith M Wesolowski 
4613ab5bb018SKeith M Wesolowski 			/* 1 GB large page - enable only for the 64-bit kernel */
4614ab5bb018SKeith M Wesolowski 			if (cp->cp_edx & CPUID_AMD_EDX_1GPG) {
4615ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_1GPG);
4616ab5bb018SKeith M Wesolowski 			}
4617ab5bb018SKeith M Wesolowski 
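			/*
			 * As with the base SSE checks earlier, SSE4A is
			 * only believed when fxsave/fxrstor support is
			 * also present.
			 */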
4618ab5bb018SKeith M Wesolowski 			if ((cpi->cpi_vendor == X86_VENDOR_AMD ||
4619ab5bb018SKeith M Wesolowski 			    cpi->cpi_vendor == X86_VENDOR_HYGON) &&
4620ab5bb018SKeith M Wesolowski 			    (cpi->cpi_std[1].cp_edx & CPUID_INTC_EDX_FXSR) &&
4621ab5bb018SKeith M Wesolowski 			    (cp->cp_ecx & CPUID_AMD_ECX_SSE4A)) {
4622ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_SSE4A);
4623ab5bb018SKeith M Wesolowski 			}
4624ab5bb018SKeith M Wesolowski 
4625ab5bb018SKeith M Wesolowski 			/*
4626ab5bb018SKeith M Wesolowski 			 * It's really tricky to support syscall/sysret in
4627ab5bb018SKeith M Wesolowski 			 * the i386 kernel; we rely on sysenter/sysexit
4628ab5bb018SKeith M Wesolowski 			 * instead.  In the amd64 kernel, things are -way-
4629ab5bb018SKeith M Wesolowski 			 * better.
4630ab5bb018SKeith M Wesolowski 			 */
4631ab5bb018SKeith M Wesolowski 			if (cp->cp_edx & CPUID_AMD_EDX_SYSC) {
4632ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_ASYSC);
4633ab5bb018SKeith M Wesolowski 			}
4634ab5bb018SKeith M Wesolowski 
4635ab5bb018SKeith M Wesolowski 			/*
4636ab5bb018SKeith M Wesolowski 			 * While we're thinking about system calls, note
4637ab5bb018SKeith M Wesolowski 			 * that AMD processors don't support sysenter
4638ab5bb018SKeith M Wesolowski 			 * in long mode at all, so don't try to program them.
4639ab5bb018SKeith M Wesolowski 			 */
4640ab5bb018SKeith M Wesolowski 			if (x86_vendor == X86_VENDOR_AMD ||
4641ab5bb018SKeith M Wesolowski 			    x86_vendor == X86_VENDOR_HYGON) {
4642ab5bb018SKeith M Wesolowski 				remove_x86_feature(featureset, X86FSET_SEP);
4643ab5bb018SKeith M Wesolowski 			}
4644ab5bb018SKeith M Wesolowski 
4645ab5bb018SKeith M Wesolowski 			if (cp->cp_edx & CPUID_AMD_EDX_TSCP) {
4646ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_TSCP);
4647ab5bb018SKeith M Wesolowski 			}
4648ab5bb018SKeith M Wesolowski 
4649ab5bb018SKeith M Wesolowski 			if (cp->cp_ecx & CPUID_AMD_ECX_SVM) {
4650ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_SVM);
4651ab5bb018SKeith M Wesolowski 			}
4652ab5bb018SKeith M Wesolowski 
4653ab5bb018SKeith M Wesolowski 			if (cp->cp_ecx & CPUID_AMD_ECX_TOPOEXT) {
4654ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_TOPOEXT);
4655ab5bb018SKeith M Wesolowski 			}
4656ab5bb018SKeith M Wesolowski 
4657ab5bb018SKeith M Wesolowski 			if (cp->cp_ecx & CPUID_AMD_ECX_PCEC) {
4658ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_AMD_PCEC);
4659ab5bb018SKeith M Wesolowski 			}
4660ab5bb018SKeith M Wesolowski 
4661ab5bb018SKeith M Wesolowski 			if (cp->cp_ecx & CPUID_AMD_ECX_XOP) {
4662ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_XOP);
4663ab5bb018SKeith M Wesolowski 			}
4664ab5bb018SKeith M Wesolowski 
4665ab5bb018SKeith M Wesolowski 			if (cp->cp_ecx & CPUID_AMD_ECX_FMA4) {
4666ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_FMA4);
4667ab5bb018SKeith M Wesolowski 			}
4668ab5bb018SKeith M Wesolowski 
4669ab5bb018SKeith M Wesolowski 			if (cp->cp_ecx & CPUID_AMD_ECX_TBM) {
4670ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_TBM);
4671ab5bb018SKeith M Wesolowski 			}
4672ab5bb018SKeith M Wesolowski 
4673ab5bb018SKeith M Wesolowski 			if (cp->cp_ecx & CPUID_AMD_ECX_MONITORX) {
4674ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_MONITORX);
4675ab5bb018SKeith M Wesolowski 			}
4676ab5bb018SKeith M Wesolowski 			break;
4677ab5bb018SKeith M Wesolowski 		default:
4678ab5bb018SKeith M Wesolowski 			break;
4679ab5bb018SKeith M Wesolowski 		}
4680ab5bb018SKeith M Wesolowski 
4681ab5bb018SKeith M Wesolowski 		/*
4682ab5bb018SKeith M Wesolowski 		 * Get CPUID data about processor cores and hyperthreads.
4683ab5bb018SKeith M Wesolowski 		 */
4684ab5bb018SKeith M Wesolowski 		switch (cpi->cpi_vendor) {
4685ab5bb018SKeith M Wesolowski 		case X86_VENDOR_Intel:
4686ab5bb018SKeith M Wesolowski 			if (cpi->cpi_maxeax >= 4) {
4687ab5bb018SKeith M Wesolowski 				cp = &cpi->cpi_std[4];
4688ab5bb018SKeith M Wesolowski 				cp->cp_eax = 4;
4689ab5bb018SKeith M Wesolowski 				cp->cp_ecx = 0;
4690ab5bb018SKeith M Wesolowski 				(void) __cpuid_insn(cp);
4691ab5bb018SKeith M Wesolowski 				platform_cpuid_mangle(cpi->cpi_vendor, 4, cp);
4692ab5bb018SKeith M Wesolowski 			}
4693ab5bb018SKeith M Wesolowski 			/*FALLTHROUGH*/
4694ab5bb018SKeith M Wesolowski 		case X86_VENDOR_AMD:
4695ab5bb018SKeith M Wesolowski 		case X86_VENDOR_HYGON:
4696ab5bb018SKeith M Wesolowski 			if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_8)
4697ab5bb018SKeith M Wesolowski 				break;
4698ab5bb018SKeith M Wesolowski 			cp = &cpi->cpi_extd[8];
4699ab5bb018SKeith M Wesolowski 			cp->cp_eax = CPUID_LEAF_EXT_8;
4700ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(cp);
4701ab5bb018SKeith M Wesolowski 			platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_8,
4702ab5bb018SKeith M Wesolowski 			    cp);
4703ab5bb018SKeith M Wesolowski 
4704ab5bb018SKeith M Wesolowski 			/*
4705ab5bb018SKeith M Wesolowski 			 * AMD uses ebx for some extended functions.
4706ab5bb018SKeith M Wesolowski 			 */
4707ab5bb018SKeith M Wesolowski 			if (cpi->cpi_vendor == X86_VENDOR_AMD ||
4708ab5bb018SKeith M Wesolowski 			    cpi->cpi_vendor == X86_VENDOR_HYGON) {
4709ab5bb018SKeith M Wesolowski 				/*
4710ab5bb018SKeith M Wesolowski 				 * While we're here, check for the AMD "Error
4711ab5bb018SKeith M Wesolowski 				 * Pointer Zero/Restore" feature. This can be
4712ab5bb018SKeith M Wesolowski 				 * used to set up the FP save handlers
4713ab5bb018SKeith M Wesolowski 				 * appropriately.
4714ab5bb018SKeith M Wesolowski 				 */
4715ab5bb018SKeith M Wesolowski 				if (cp->cp_ebx & CPUID_AMD_EBX_ERR_PTR_ZERO) {
4716ab5bb018SKeith M Wesolowski 					cpi->cpi_fp_amd_save = 0;
4717ab5bb018SKeith M Wesolowski 				} else {
4718ab5bb018SKeith M Wesolowski 					cpi->cpi_fp_amd_save = 1;
4719ab5bb018SKeith M Wesolowski 				}
4720ab5bb018SKeith M Wesolowski 
4721ab5bb018SKeith M Wesolowski 				if (cp->cp_ebx & CPUID_AMD_EBX_CLZERO) {
4722ab5bb018SKeith M Wesolowski 					add_x86_feature(featureset,
4723ab5bb018SKeith M Wesolowski 					    X86FSET_CLZERO);
4724ab5bb018SKeith M Wesolowski 				}
4725ab5bb018SKeith M Wesolowski 			}
4726ab5bb018SKeith M Wesolowski 
4727ab5bb018SKeith M Wesolowski 			/*
4728ab5bb018SKeith M Wesolowski 			 * Virtual and physical address limits from
4729ab5bb018SKeith M Wesolowski 			 * cpuid override previously guessed values.
4730ab5bb018SKeith M Wesolowski 			 */
4731ab5bb018SKeith M Wesolowski 			cpi->cpi_pabits = BITX(cp->cp_eax, 7, 0);
4732ab5bb018SKeith M Wesolowski 			cpi->cpi_vabits = BITX(cp->cp_eax, 15, 8);
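			/*
			 * For example, a leaf 0x80000008 %eax of 0x3030
			 * encodes 48 physical and 48 virtual address bits.
			 */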
4733ab5bb018SKeith M Wesolowski 			break;
4734ab5bb018SKeith M Wesolowski 		default:
4735ab5bb018SKeith M Wesolowski 			break;
4736ab5bb018SKeith M Wesolowski 		}
4737ab5bb018SKeith M Wesolowski 
4738ab5bb018SKeith M Wesolowski 		/*
4739ab5bb018SKeith M Wesolowski 		 * Get CPUID data about TSC Invariance in Deep C-State.
4740ab5bb018SKeith M Wesolowski 		 */
4741ab5bb018SKeith M Wesolowski 		switch (cpi->cpi_vendor) {
4742ab5bb018SKeith M Wesolowski 		case X86_VENDOR_Intel:
4743ab5bb018SKeith M Wesolowski 		case X86_VENDOR_AMD:
4744ab5bb018SKeith M Wesolowski 		case X86_VENDOR_HYGON:
4745ab5bb018SKeith M Wesolowski 			if (cpi->cpi_maxeax >= 7) {
4746ab5bb018SKeith M Wesolowski 				cp = &cpi->cpi_extd[7];
4747ab5bb018SKeith M Wesolowski 				cp->cp_eax = 0x80000007;
4748ab5bb018SKeith M Wesolowski 				cp->cp_ecx = 0;
4749ab5bb018SKeith M Wesolowski 				(void) __cpuid_insn(cp);
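				/*
				 * The leaf is only fetched and cached in
				 * cpi_extd[7] here; the invariance
				 * indication in %edx is left to later
				 * consumers.
				 */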
4750ab5bb018SKeith M Wesolowski 			}
4751ab5bb018SKeith M Wesolowski 			break;
4752ab5bb018SKeith M Wesolowski 		default:
4753ab5bb018SKeith M Wesolowski 			break;
4754ab5bb018SKeith M Wesolowski 		}
4755ab5bb018SKeith M Wesolowski 	}
4756ab5bb018SKeith M Wesolowski 
4757ab5bb018SKeith M Wesolowski 	/*
4758ab5bb018SKeith M Wesolowski 	 * cpuid_basic_ppin assumes that cpuid_basic_topology has already been
4759ab5bb018SKeith M Wesolowski 	 * run and thus gathered some of its dependent leaves.
4760ab5bb018SKeith M Wesolowski 	 */
4761ab5bb018SKeith M Wesolowski 	cpuid_basic_topology(cpu, featureset);
4762ab5bb018SKeith M Wesolowski 	cpuid_basic_thermal(cpu, featureset);
4763ab5bb018SKeith M Wesolowski #if !defined(__xpv)
4764ab5bb018SKeith M Wesolowski 	cpuid_basic_ppin(cpu, featureset);
4765ab5bb018SKeith M Wesolowski #endif
4766ab5bb018SKeith M Wesolowski 
4767ab5bb018SKeith M Wesolowski 	if (cpi->cpi_vendor == X86_VENDOR_AMD ||
4768ab5bb018SKeith M Wesolowski 	    cpi->cpi_vendor == X86_VENDOR_HYGON) {
4769ab5bb018SKeith M Wesolowski 		if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_8 &&
4770ab5bb018SKeith M Wesolowski 		    cpi->cpi_extd[8].cp_ebx & CPUID_AMD_EBX_ERR_PTR_ZERO) {
4771ab5bb018SKeith M Wesolowski 			/* Special handling for AMD FP not necessary. */
4772ab5bb018SKeith M Wesolowski 			cpi->cpi_fp_amd_save = 0;
4773ab5bb018SKeith M Wesolowski 		} else {
4774ab5bb018SKeith M Wesolowski 			cpi->cpi_fp_amd_save = 1;
4775ab5bb018SKeith M Wesolowski 		}
4776ab5bb018SKeith M Wesolowski 	}
4777ab5bb018SKeith M Wesolowski 
4778ab5bb018SKeith M Wesolowski 	/*
4779ab5bb018SKeith M Wesolowski 	 * Check (and potentially set) if lfence is serializing.
4780ab5bb018SKeith M Wesolowski 	 * This is useful for accurate rdtsc measurements and AMD retpolines.
4781ab5bb018SKeith M Wesolowski 	 */
4782ab5bb018SKeith M Wesolowski 	if ((cpi->cpi_vendor == X86_VENDOR_AMD ||
4783ab5bb018SKeith M Wesolowski 	    cpi->cpi_vendor == X86_VENDOR_HYGON) &&
4784ab5bb018SKeith M Wesolowski 	    is_x86_feature(featureset, X86FSET_SSE2)) {
4785ab5bb018SKeith M Wesolowski 		/*
4786ab5bb018SKeith M Wesolowski 		 * The AMD white paper Software Techniques For Managing
4787ab5bb018SKeith M Wesolowski 		 * Speculation on AMD Processors details circumstances for when
4788ab5bb018SKeith M Wesolowski 		 * lfence instructions are serializing.
4789ab5bb018SKeith M Wesolowski 		 *
4790ab5bb018SKeith M Wesolowski 		 * On family 0xf and 0x11, it is inherently so.  On family 0x10
4791ab5bb018SKeith M Wesolowski 		 * and later (excluding 0x11), a bit in the DE_CFG MSR
4792ab5bb018SKeith M Wesolowski 		 * determines the lfence behavior.  Per that whitepaper, AMD has
4793ab5bb018SKeith M Wesolowski 		 * committed to supporting that MSR on all later CPUs.
4794ab5bb018SKeith M Wesolowski 		 */
4795ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 0xf || cpi->cpi_family == 0x11) {
4796ab5bb018SKeith M Wesolowski 			add_x86_feature(featureset, X86FSET_LFENCE_SER);
4797ab5bb018SKeith M Wesolowski 		} else if (cpi->cpi_family >= 0x10) {
4798ab5bb018SKeith M Wesolowski #if !defined(__xpv)
4799ab5bb018SKeith M Wesolowski 			uint64_t val;
4800ab5bb018SKeith M Wesolowski 
4801ab5bb018SKeith M Wesolowski 			/*
4802ab5bb018SKeith M Wesolowski 			 * Be careful when attempting to enable the bit, and
4803ab5bb018SKeith M Wesolowski 			 * verify that it was actually set in case we are
4804ab5bb018SKeith M Wesolowski 			 * running in a hypervisor which is less than faithful
4805ab5bb018SKeith M Wesolowski 			 * about its emulation of this feature.
4806ab5bb018SKeith M Wesolowski 			 */
4807ab5bb018SKeith M Wesolowski 			on_trap_data_t otd;
4808ab5bb018SKeith M Wesolowski 			if (!on_trap(&otd, OT_DATA_ACCESS)) {
4809ab5bb018SKeith M Wesolowski 				val = rdmsr(MSR_AMD_DE_CFG);
4810ab5bb018SKeith M Wesolowski 				val |= AMD_DE_CFG_LFENCE_DISPATCH;
4811ab5bb018SKeith M Wesolowski 				wrmsr(MSR_AMD_DE_CFG, val);
4812ab5bb018SKeith M Wesolowski 				val = rdmsr(MSR_AMD_DE_CFG);
4813ab5bb018SKeith M Wesolowski 			} else {
4814ab5bb018SKeith M Wesolowski 				val = 0;
4815ab5bb018SKeith M Wesolowski 			}
4816ab5bb018SKeith M Wesolowski 			no_trap();
4817ab5bb018SKeith M Wesolowski 
4818ab5bb018SKeith M Wesolowski 			if ((val & AMD_DE_CFG_LFENCE_DISPATCH) != 0) {
4819ab5bb018SKeith M Wesolowski 				add_x86_feature(featureset, X86FSET_LFENCE_SER);
4820ab5bb018SKeith M Wesolowski 			}
4821ab5bb018SKeith M Wesolowski #endif
4822ab5bb018SKeith M Wesolowski 		}
4823ab5bb018SKeith M Wesolowski 	} else if (cpi->cpi_vendor == X86_VENDOR_Intel &&
4824ab5bb018SKeith M Wesolowski 	    is_x86_feature(featureset, X86FSET_SSE2)) {
4825ab5bb018SKeith M Wesolowski 		/*
4826ab5bb018SKeith M Wesolowski 		 * Documentation and other OSes indicate that lfence is always
4827ab5bb018SKeith M Wesolowski 		 * serializing on Intel CPUs.
4828ab5bb018SKeith M Wesolowski 		 */
4829ab5bb018SKeith M Wesolowski 		add_x86_feature(featureset, X86FSET_LFENCE_SER);
4830ab5bb018SKeith M Wesolowski 	}
4831ab5bb018SKeith M Wesolowski 
4833ab5bb018SKeith M Wesolowski 	/*
4834651a12cbSRobert Mustacchi 	 * Check the processor leaves that are used for security features. Grab
4835651a12cbSRobert Mustacchi 	 * any additional processor-specific leaves that we may not have yet.
4836ab5bb018SKeith M Wesolowski 	 */
4837651a12cbSRobert Mustacchi 	switch (cpi->cpi_vendor) {
4838651a12cbSRobert Mustacchi 	case X86_VENDOR_AMD:
4839651a12cbSRobert Mustacchi 	case X86_VENDOR_HYGON:
4840651a12cbSRobert Mustacchi 		if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_21) {
4841a8cd71ddSLuqman Aden 			cp = &cpi->cpi_extd[0x21];
4842651a12cbSRobert Mustacchi 			cp->cp_eax = CPUID_LEAF_EXT_21;
4843651a12cbSRobert Mustacchi 			cp->cp_ecx = 0;
4844651a12cbSRobert Mustacchi 			(void) __cpuid_insn(cp);
4845651a12cbSRobert Mustacchi 		}
4846651a12cbSRobert Mustacchi 		break;
4847651a12cbSRobert Mustacchi 	default:
4848651a12cbSRobert Mustacchi 		break;
4849651a12cbSRobert Mustacchi 	}
4850651a12cbSRobert Mustacchi 
4851ab5bb018SKeith M Wesolowski 	cpuid_scan_security(cpu, featureset);
4852ab5bb018SKeith M Wesolowski }
4853ab5bb018SKeith M Wesolowski 
4854ab5bb018SKeith M Wesolowski /*
4855ab5bb018SKeith M Wesolowski  * Make copies of the cpuid table entries we depend on, in
4856ab5bb018SKeith M Wesolowski  * part for ease of parsing now, in part so that we have only
4857ab5bb018SKeith M Wesolowski  * one place to correct any of it, in part for ease of
4858ab5bb018SKeith M Wesolowski  * later export to userland, and in part so we can look at
4859ab5bb018SKeith M Wesolowski  * this stuff in a crash dump.
4860ab5bb018SKeith M Wesolowski  */
4861ab5bb018SKeith M Wesolowski 
4862ab5bb018SKeith M Wesolowski static void
4863ab5bb018SKeith M Wesolowski cpuid_pass_extended(cpu_t *cpu, void *_arg __unused)
4864ab5bb018SKeith M Wesolowski {
4865ab5bb018SKeith M Wesolowski 	uint_t n, nmax;
4866ab5bb018SKeith M Wesolowski 	int i;
4867ab5bb018SKeith M Wesolowski 	struct cpuid_regs *cp;
4868ab5bb018SKeith M Wesolowski 	uint8_t *dp;
4869ab5bb018SKeith M Wesolowski 	uint32_t *iptr;
4870ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
4871ab5bb018SKeith M Wesolowski 
4872ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax < 1)
4873ab5bb018SKeith M Wesolowski 		return;
4874ab5bb018SKeith M Wesolowski 
4875ab5bb018SKeith M Wesolowski 	if ((nmax = cpi->cpi_maxeax + 1) > NMAX_CPI_STD)
4876ab5bb018SKeith M Wesolowski 		nmax = NMAX_CPI_STD;
4877ab5bb018SKeith M Wesolowski 	/*
4878ab5bb018SKeith M Wesolowski 	 * (We already handled n == 0 and n == 1 in the basic pass)
4879ab5bb018SKeith M Wesolowski 	 */
4880ab5bb018SKeith M Wesolowski 	for (n = 2, cp = &cpi->cpi_std[2]; n < nmax; n++, cp++) {
4881ab5bb018SKeith M Wesolowski 		/*
4882ab5bb018SKeith M Wesolowski 		 * leaves 6 and 7 were handled in the basic pass
4883ab5bb018SKeith M Wesolowski 		 */
4884ab5bb018SKeith M Wesolowski 		if (n == 6 || n == 7)
4885ab5bb018SKeith M Wesolowski 			continue;
4886ab5bb018SKeith M Wesolowski 
4887ab5bb018SKeith M Wesolowski 		cp->cp_eax = n;
4888ab5bb018SKeith M Wesolowski 
4889ab5bb018SKeith M Wesolowski 		/*
4890ab5bb018SKeith M Wesolowski 		 * CPUID function 4 expects %ecx to be initialized
4891ab5bb018SKeith M Wesolowski 		 * with an index which indicates which cache to return
4892ab5bb018SKeith M Wesolowski 		 * information about. The OS is expected to call function 4
4893ab5bb018SKeith M Wesolowski 		 * with %ecx set to 0, 1, 2, ... until it returns with
4894ab5bb018SKeith M Wesolowski 		 * EAX[4:0] set to 0, which indicates there are no more
4895ab5bb018SKeith M Wesolowski 		 * caches.
4896ab5bb018SKeith M Wesolowski 		 *
4897ab5bb018SKeith M Wesolowski 		 * Here, populate cpi_std[4] with the information returned by
4898ab5bb018SKeith M Wesolowski 		 * function 4 when %ecx == 0, and do the rest in a later pass
4899ab5bb018SKeith M Wesolowski 		 * when dynamic memory allocation becomes available.
4900ab5bb018SKeith M Wesolowski 		 *
4901ab5bb018SKeith M Wesolowski 		 * Note: we need to explicitly initialize %ecx here, since
4902ab5bb018SKeith M Wesolowski 		 * function 4 may have been previously invoked.
4903ab5bb018SKeith M Wesolowski 		 */
4904ab5bb018SKeith M Wesolowski 		if (n == 4)
4905ab5bb018SKeith M Wesolowski 			cp->cp_ecx = 0;
4906ab5bb018SKeith M Wesolowski 
4907ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(cp);
4908ab5bb018SKeith M Wesolowski 		platform_cpuid_mangle(cpi->cpi_vendor, n, cp);
4909ab5bb018SKeith M Wesolowski 		switch (n) {
4910ab5bb018SKeith M Wesolowski 		case 2:
4911ab5bb018SKeith M Wesolowski 			/*
4912ab5bb018SKeith M Wesolowski 			 * "the lower 8 bits of the %eax register
4913ab5bb018SKeith M Wesolowski 			 * contain a value that identifies the number
4914ab5bb018SKeith M Wesolowski 			 * of times the cpuid [instruction] has to be
4915ab5bb018SKeith M Wesolowski 			 * executed to obtain a complete image of the
4916ab5bb018SKeith M Wesolowski 			 * processor's caching systems."
4917ab5bb018SKeith M Wesolowski 			 *
4918ab5bb018SKeith M Wesolowski 			 * How *do* they make this stuff up?
4919ab5bb018SKeith M Wesolowski 			 */
4920ab5bb018SKeith M Wesolowski 			cpi->cpi_ncache = sizeof (*cp) *
4921ab5bb018SKeith M Wesolowski 			    BITX(cp->cp_eax, 7, 0);
4922ab5bb018SKeith M Wesolowski 			if (cpi->cpi_ncache == 0)
4923ab5bb018SKeith M Wesolowski 				break;
4924ab5bb018SKeith M Wesolowski 			cpi->cpi_ncache--;	/* skip count byte */
4925ab5bb018SKeith M Wesolowski 
4926ab5bb018SKeith M Wesolowski 			/*
4927ab5bb018SKeith M Wesolowski 			 * Well, for now, rather than attempt to implement
4928ab5bb018SKeith M Wesolowski 			 * this slightly dubious algorithm, we just look
4929ab5bb018SKeith M Wesolowski 			 * at the first 15 descriptor bytes.
4930ab5bb018SKeith M Wesolowski 			 */
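			/*
			 * (Each cpuid invocation returns sizeof (*cp) ==
			 * 16 descriptor bytes across the four registers;
			 * dropping the count byte leaves at most 15.)
			 */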
4931ab5bb018SKeith M Wesolowski 			if (cpi->cpi_ncache > (sizeof (*cp) - 1))
4932ab5bb018SKeith M Wesolowski 				cpi->cpi_ncache = sizeof (*cp) - 1;
4933ab5bb018SKeith M Wesolowski 
4934ab5bb018SKeith M Wesolowski 			dp = cpi->cpi_cacheinfo;
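			/*
			 * A register whose bit 31 is set holds no valid
			 * descriptors; otherwise, every nonzero byte is a
			 * cache or TLB descriptor. (%al holds the count
			 * consumed above, so scanning of %eax starts at
			 * byte 1.)
			 */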
4935ab5bb018SKeith M Wesolowski 			if (BITX(cp->cp_eax, 31, 31) == 0) {
4936ab5bb018SKeith M Wesolowski 				uint8_t *p = (void *)&cp->cp_eax;
4937ab5bb018SKeith M Wesolowski 				for (i = 1; i < 4; i++)
4938ab5bb018SKeith M Wesolowski 					if (p[i] != 0)
4939ab5bb018SKeith M Wesolowski 						*dp++ = p[i];
4940ab5bb018SKeith M Wesolowski 			}
4941ab5bb018SKeith M Wesolowski 			if (BITX(cp->cp_ebx, 31, 31) == 0) {
4942ab5bb018SKeith M Wesolowski 				uint8_t *p = (void *)&cp->cp_ebx;
4943ab5bb018SKeith M Wesolowski 				for (i = 0; i < 4; i++)
4944ab5bb018SKeith M Wesolowski 					if (p[i] != 0)
4945ab5bb018SKeith M Wesolowski 						*dp++ = p[i];
4946ab5bb018SKeith M Wesolowski 			}
4947ab5bb018SKeith M Wesolowski 			if (BITX(cp->cp_ecx, 31, 31) == 0) {
4948ab5bb018SKeith M Wesolowski 				uint8_t *p = (void *)&cp->cp_ecx;
4949ab5bb018SKeith M Wesolowski 				for (i = 0; i < 4; i++)
4950ab5bb018SKeith M Wesolowski 					if (p[i] != 0)
4951ab5bb018SKeith M Wesolowski 						*dp++ = p[i];
4952ab5bb018SKeith M Wesolowski 			}
4953ab5bb018SKeith M Wesolowski 			if (BITX(cp->cp_edx, 31, 31) == 0) {
4954ab5bb018SKeith M Wesolowski 				uint8_t *p = (void *)&cp->cp_edx;
4955ab5bb018SKeith M Wesolowski 				for (i = 0; i < 4; i++)
4956ab5bb018SKeith M Wesolowski 					if (p[i] != 0)
4957ab5bb018SKeith M Wesolowski 						*dp++ = p[i];
4958ab5bb018SKeith M Wesolowski 			}
4959ab5bb018SKeith M Wesolowski 			break;
4960ab5bb018SKeith M Wesolowski 
4961ab5bb018SKeith M Wesolowski 		case 3:	/* Processor serial number, if PSN supported */
4962ab5bb018SKeith M Wesolowski 			break;
4963ab5bb018SKeith M Wesolowski 
4964ab5bb018SKeith M Wesolowski 		case 4:	/* Deterministic cache parameters */
4965ab5bb018SKeith M Wesolowski 			break;
4966ab5bb018SKeith M Wesolowski 
4967ab5bb018SKeith M Wesolowski 		case 5:	/* Monitor/Mwait parameters */
4968ab5bb018SKeith M Wesolowski 		{
4969ab5bb018SKeith M Wesolowski 			size_t mwait_size;
4970ab5bb018SKeith M Wesolowski 
4971ab5bb018SKeith M Wesolowski 			/*
4972ab5bb018SKeith M Wesolowski 			 * check cpi_mwait.support, which was set in
4973ab5bb018SKeith M Wesolowski 			 * cpuid_pass_basic()
4974ab5bb018SKeith M Wesolowski 			 */
4975ab5bb018SKeith M Wesolowski 			if (!(cpi->cpi_mwait.support & MWAIT_SUPPORT))
4976ab5bb018SKeith M Wesolowski 				break;
4977ab5bb018SKeith M Wesolowski 
4978ab5bb018SKeith M Wesolowski 			/*
4979ab5bb018SKeith M Wesolowski 			 * Protect ourselves from an insane mwait line size.
4980ab5bb018SKeith M Wesolowski 			 * Workaround for incomplete hardware emulator(s).
4981ab5bb018SKeith M Wesolowski 			 */
4982ab5bb018SKeith M Wesolowski 			mwait_size = (size_t)MWAIT_SIZE_MAX(cpi);
4983ab5bb018SKeith M Wesolowski 			if (mwait_size < sizeof (uint32_t) ||
4984ab5bb018SKeith M Wesolowski 			    !ISP2(mwait_size)) {
4985ab5bb018SKeith M Wesolowski #if DEBUG
4986ab5bb018SKeith M Wesolowski 				cmn_err(CE_NOTE, "Cannot handle cpu %d mwait "
4987ab5bb018SKeith M Wesolowski 				    "size %ld", cpu->cpu_id, (long)mwait_size);
4988ab5bb018SKeith M Wesolowski #endif
4989ab5bb018SKeith M Wesolowski 				break;
4990ab5bb018SKeith M Wesolowski 			}
4991ab5bb018SKeith M Wesolowski 
4992ab5bb018SKeith M Wesolowski 			cpi->cpi_mwait.mon_min = (size_t)MWAIT_SIZE_MIN(cpi);
4993ab5bb018SKeith M Wesolowski 			cpi->cpi_mwait.mon_max = mwait_size;
4994ab5bb018SKeith M Wesolowski 			if (MWAIT_EXTENSION(cpi)) {
4995ab5bb018SKeith M Wesolowski 				cpi->cpi_mwait.support |= MWAIT_EXTENSIONS;
4996ab5bb018SKeith M Wesolowski 				if (MWAIT_INT_ENABLE(cpi))
4997ab5bb018SKeith M Wesolowski 					cpi->cpi_mwait.support |=
4998ab5bb018SKeith M Wesolowski 					    MWAIT_ECX_INT_ENABLE;
4999ab5bb018SKeith M Wesolowski 			}
5000ab5bb018SKeith M Wesolowski 			break;
5001ab5bb018SKeith M Wesolowski 		}
5002ab5bb018SKeith M Wesolowski 		default:
5003ab5bb018SKeith M Wesolowski 			break;
5004ab5bb018SKeith M Wesolowski 		}
5005ab5bb018SKeith M Wesolowski 	}
5006ab5bb018SKeith M Wesolowski 
5007ab5bb018SKeith M Wesolowski 	/*
5008ab5bb018SKeith M Wesolowski 	 * XSAVE enumeration
5009ab5bb018SKeith M Wesolowski 	 */
5010ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax >= 0xD) {
5011ab5bb018SKeith M Wesolowski 		struct cpuid_regs regs;
5012ab5bb018SKeith M Wesolowski 		boolean_t cpuid_d_valid = B_TRUE;
5013ab5bb018SKeith M Wesolowski 
5014ab5bb018SKeith M Wesolowski 		cp = &regs;
5015ab5bb018SKeith M Wesolowski 		cp->cp_eax = 0xD;
5016ab5bb018SKeith M Wesolowski 		cp->cp_edx = cp->cp_ebx = cp->cp_ecx = 0;
5017ab5bb018SKeith M Wesolowski 
5018ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(cp);
5019ab5bb018SKeith M Wesolowski 
5020ab5bb018SKeith M Wesolowski 		/*
5021ab5bb018SKeith M Wesolowski 		 * Sanity checks for debug
5022ab5bb018SKeith M Wesolowski 		 */
5023ab5bb018SKeith M Wesolowski 		if ((cp->cp_eax & XFEATURE_LEGACY_FP) == 0 ||
5024ab5bb018SKeith M Wesolowski 		    (cp->cp_eax & XFEATURE_SSE) == 0) {
5025ab5bb018SKeith M Wesolowski 			cpuid_d_valid = B_FALSE;
5026ab5bb018SKeith M Wesolowski 		}
5027ab5bb018SKeith M Wesolowski 
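		/*
		 * Subleaf 0 of leaf 0xD: %edx:%eax form the 64-bit mask
		 * of xfeatures the hardware supports, and %ecx is the
		 * save area size required if all of them were enabled.
		 */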
5028ab5bb018SKeith M Wesolowski 		cpi->cpi_xsave.xsav_hw_features_low = cp->cp_eax;
5029ab5bb018SKeith M Wesolowski 		cpi->cpi_xsave.xsav_hw_features_high = cp->cp_edx;
5030ab5bb018SKeith M Wesolowski 		cpi->cpi_xsave.xsav_max_size = cp->cp_ecx;
5031ab5bb018SKeith M Wesolowski 
5032ab5bb018SKeith M Wesolowski 		/*
5033ab5bb018SKeith M Wesolowski 		 * If the hw supports AVX, get the size and offset in the save
5034ab5bb018SKeith M Wesolowski 		 * area for the ymm state.
5035ab5bb018SKeith M Wesolowski 		 */
5036ab5bb018SKeith M Wesolowski 		if (cpi->cpi_xsave.xsav_hw_features_low & XFEATURE_AVX) {
5037ab5bb018SKeith M Wesolowski 			cp->cp_eax = 0xD;
5038ab5bb018SKeith M Wesolowski 			cp->cp_ecx = 2;
5039ab5bb018SKeith M Wesolowski 			cp->cp_edx = cp->cp_ebx = 0;
5040ab5bb018SKeith M Wesolowski 
5041ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(cp);
5042ab5bb018SKeith M Wesolowski 
5043ab5bb018SKeith M Wesolowski 			if (cp->cp_ebx != CPUID_LEAFD_2_YMM_OFFSET ||
5044ab5bb018SKeith M Wesolowski 			    cp->cp_eax != CPUID_LEAFD_2_YMM_SIZE) {
5045ab5bb018SKeith M Wesolowski 				cpuid_d_valid = B_FALSE;
5046ab5bb018SKeith M Wesolowski 			}
5047ab5bb018SKeith M Wesolowski 
5048ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.ymm_size = cp->cp_eax;
5049ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.ymm_offset = cp->cp_ebx;
5050ab5bb018SKeith M Wesolowski 		}
5051ab5bb018SKeith M Wesolowski 
5052ab5bb018SKeith M Wesolowski 		/*
5053ab5bb018SKeith M Wesolowski 		 * If the hw supports MPX, get the size and offset in the
5054ab5bb018SKeith M Wesolowski 		 * save area for BNDREGS and BNDCSR.
5055ab5bb018SKeith M Wesolowski 		 */
5056ab5bb018SKeith M Wesolowski 		if (cpi->cpi_xsave.xsav_hw_features_low & XFEATURE_MPX) {
5057ab5bb018SKeith M Wesolowski 			cp->cp_eax = 0xD;
5058ab5bb018SKeith M Wesolowski 			cp->cp_ecx = 3;
5059ab5bb018SKeith M Wesolowski 			cp->cp_edx = cp->cp_ebx = 0;
5060ab5bb018SKeith M Wesolowski 
5061ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(cp);
5062ab5bb018SKeith M Wesolowski 
5063ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.bndregs_size = cp->cp_eax;
5064ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.bndregs_offset = cp->cp_ebx;
5065ab5bb018SKeith M Wesolowski 
5066ab5bb018SKeith M Wesolowski 			cp->cp_eax = 0xD;
5067ab5bb018SKeith M Wesolowski 			cp->cp_ecx = 4;
5068ab5bb018SKeith M Wesolowski 			cp->cp_edx = cp->cp_ebx = 0;
5069ab5bb018SKeith M Wesolowski 
5070ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(cp);
5071ab5bb018SKeith M Wesolowski 
5072ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.bndcsr_size = cp->cp_eax;
5073ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.bndcsr_offset = cp->cp_ebx;
5074ab5bb018SKeith M Wesolowski 		}
5075ab5bb018SKeith M Wesolowski 
5076ab5bb018SKeith M Wesolowski 		/*
5077ab5bb018SKeith M Wesolowski 		 * If the hw supports AVX512, get the size and offset in the
5078ab5bb018SKeith M Wesolowski 		 * save area for the opmask registers and zmm state.
5079ab5bb018SKeith M Wesolowski 		 */
5080ab5bb018SKeith M Wesolowski 		if (cpi->cpi_xsave.xsav_hw_features_low & XFEATURE_AVX512) {
5081ab5bb018SKeith M Wesolowski 			cp->cp_eax = 0xD;
5082ab5bb018SKeith M Wesolowski 			cp->cp_ecx = 5;
5083ab5bb018SKeith M Wesolowski 			cp->cp_edx = cp->cp_ebx = 0;
5084ab5bb018SKeith M Wesolowski 
5085ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(cp);
5086ab5bb018SKeith M Wesolowski 
5087ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.opmask_size = cp->cp_eax;
5088ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.opmask_offset = cp->cp_ebx;
5089ab5bb018SKeith M Wesolowski 
5090ab5bb018SKeith M Wesolowski 			cp->cp_eax = 0xD;
5091ab5bb018SKeith M Wesolowski 			cp->cp_ecx = 6;
5092ab5bb018SKeith M Wesolowski 			cp->cp_edx = cp->cp_ebx = 0;
5093ab5bb018SKeith M Wesolowski 
5094ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(cp);
5095ab5bb018SKeith M Wesolowski 
5096ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.zmmlo_size = cp->cp_eax;
5097ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.zmmlo_offset = cp->cp_ebx;
5098ab5bb018SKeith M Wesolowski 
5099ab5bb018SKeith M Wesolowski 			cp->cp_eax = 0xD;
5100ab5bb018SKeith M Wesolowski 			cp->cp_ecx = 7;
5101ab5bb018SKeith M Wesolowski 			cp->cp_edx = cp->cp_ebx = 0;
5102ab5bb018SKeith M Wesolowski 
5103ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(cp);
5104ab5bb018SKeith M Wesolowski 
5105ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.zmmhi_size = cp->cp_eax;
5106ab5bb018SKeith M Wesolowski 			cpi->cpi_xsave.zmmhi_offset = cp->cp_ebx;
5107ab5bb018SKeith M Wesolowski 		}
5108ab5bb018SKeith M Wesolowski 
5109ab5bb018SKeith M Wesolowski 		if (!is_x86_feature(x86_featureset, X86FSET_XSAVE)) {
5110ab5bb018SKeith M Wesolowski 			xsave_state_size = 0;
5111ab5bb018SKeith M Wesolowski 		} else if (cpuid_d_valid) {
5112ab5bb018SKeith M Wesolowski 			xsave_state_size = cpi->cpi_xsave.xsav_max_size;
5113ab5bb018SKeith M Wesolowski 		} else {
5114ab5bb018SKeith M Wesolowski 			/* Broken CPUID 0xD, probably in HVM */
5115ab5bb018SKeith M Wesolowski 			cmn_err(CE_WARN, "cpu%d: CPUID.0xD returns invalid "
5116ab5bb018SKeith M Wesolowski 			    "value: hw_low = %d, hw_high = %d, xsave_size = %d"
5117ab5bb018SKeith M Wesolowski 			    ", ymm_size = %d, ymm_offset = %d\n",
5118ab5bb018SKeith M Wesolowski 			    cpu->cpu_id, cpi->cpi_xsave.xsav_hw_features_low,
5119ab5bb018SKeith M Wesolowski 			    cpi->cpi_xsave.xsav_hw_features_high,
5120ab5bb018SKeith M Wesolowski 			    (int)cpi->cpi_xsave.xsav_max_size,
5121ab5bb018SKeith M Wesolowski 			    (int)cpi->cpi_xsave.ymm_size,
5122ab5bb018SKeith M Wesolowski 			    (int)cpi->cpi_xsave.ymm_offset);
5123ab5bb018SKeith M Wesolowski 
5124ab5bb018SKeith M Wesolowski 			if (xsave_state_size != 0) {
5125ab5bb018SKeith M Wesolowski 				/*
5126ab5bb018SKeith M Wesolowski 				 * This must be a non-boot CPU. We cannot
5127ab5bb018SKeith M Wesolowski 				 * continue, because the boot CPU has already
5128ab5bb018SKeith M Wesolowski 				 * enabled XSAVE.
5129ab5bb018SKeith M Wesolowski 				 */
5130ab5bb018SKeith M Wesolowski 				ASSERT(cpu->cpu_id != 0);
5131ab5bb018SKeith M Wesolowski 				cmn_err(CE_PANIC, "cpu%d: we have already "
5132ab5bb018SKeith M Wesolowski 				    "enabled XSAVE on boot cpu, cannot "
5133ab5bb018SKeith M Wesolowski 				    "continue.", cpu->cpu_id);
5134ab5bb018SKeith M Wesolowski 			} else {
5135ab5bb018SKeith M Wesolowski 				/*
5136ab5bb018SKeith M Wesolowski 				 * If we reached here on the boot CPU, it's also
5137ab5bb018SKeith M Wesolowski 				 * almost certain that we'll reach here on the
5138ab5bb018SKeith M Wesolowski 				 * non-boot CPUs. When we're here on the boot CPU
5139ab5bb018SKeith M Wesolowski 				 * we should disable the feature; on a non-boot
5140ab5bb018SKeith M Wesolowski 				 * CPU we need to confirm that we already have.
5141ab5bb018SKeith M Wesolowski 				 */
5142ab5bb018SKeith M Wesolowski 				if (cpu->cpu_id == 0) {
5143ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5144ab5bb018SKeith M Wesolowski 					    X86FSET_XSAVE);
5145ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5146ab5bb018SKeith M Wesolowski 					    X86FSET_AVX);
5147ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5148ab5bb018SKeith M Wesolowski 					    X86FSET_F16C);
5149ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5150ab5bb018SKeith M Wesolowski 					    X86FSET_BMI1);
5151ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5152ab5bb018SKeith M Wesolowski 					    X86FSET_BMI2);
5153ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5154ab5bb018SKeith M Wesolowski 					    X86FSET_FMA);
5155ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5156ab5bb018SKeith M Wesolowski 					    X86FSET_AVX2);
5157ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5158ab5bb018SKeith M Wesolowski 					    X86FSET_MPX);
5159ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5160ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512F);
5161ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5162ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512DQ);
5163ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5164ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512PF);
5165ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5166ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512ER);
5167ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5168ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512CD);
5169ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5170ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512BW);
5171ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5172ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512VL);
5173ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5174ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512FMA);
5175ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5176ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512VBMI);
5177ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5178ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512VNNI);
5179ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5180ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512VPOPCDQ);
5181ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5182ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512NNIW);
5183ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5184ab5bb018SKeith M Wesolowski 					    X86FSET_AVX512FMAPS);
5185ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5186ab5bb018SKeith M Wesolowski 					    X86FSET_VAES);
5187ab5bb018SKeith M Wesolowski 					remove_x86_feature(x86_featureset,
5188ab5bb018SKeith M Wesolowski 					    X86FSET_VPCLMULQDQ);
518956726c7eSRobert Mustacchi 					remove_x86_feature(x86_featureset,
519056726c7eSRobert Mustacchi 					    X86FSET_GFNI);
519156726c7eSRobert Mustacchi 					remove_x86_feature(x86_featureset,
519256726c7eSRobert Mustacchi 					    X86FSET_AVX512_VP2INT);
519356726c7eSRobert Mustacchi 					remove_x86_feature(x86_featureset,
519456726c7eSRobert Mustacchi 					    X86FSET_AVX512_BITALG);
519556726c7eSRobert Mustacchi 					remove_x86_feature(x86_featureset,
519656726c7eSRobert Mustacchi 					    X86FSET_AVX512_VBMI2);
519756726c7eSRobert Mustacchi 					remove_x86_feature(x86_featureset,
519856726c7eSRobert Mustacchi 					    X86FSET_AVX512_BF16);
5199ab5bb018SKeith M Wesolowski 
5200ab5bb018SKeith M Wesolowski 					xsave_force_disable = B_TRUE;
5201ab5bb018SKeith M Wesolowski 				} else {
5202ab5bb018SKeith M Wesolowski 					VERIFY(is_x86_feature(x86_featureset,
5203ab5bb018SKeith M Wesolowski 					    X86FSET_XSAVE) == B_FALSE);
5204ab5bb018SKeith M Wesolowski 				}
5205ab5bb018SKeith M Wesolowski 			}
5206ab5bb018SKeith M Wesolowski 		}
5207ab5bb018SKeith M Wesolowski 	}
5208ab5bb018SKeith M Wesolowski 
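/*
 * Illustrative sketch -- not part of this file.  The sub-leaf queries above
 * hardcode the components we care about (2 through 7), but the same walk
 * generalizes: leaf 0xD sub-leaf 0 returns the supported component mask in
 * EDX:EAX, and each set bit i >= 2 has a sub-leaf i describing that
 * component's size and save-area offset.  A userland version, assuming
 * GCC/Clang's __get_cpuid_count() helper from <cpuid.h>:
 *
 *	#include <stdio.h>
 *	#include <stdint.h>
 *	#include <cpuid.h>
 *
 *	int
 *	main(void)
 *	{
 *		uint32_t eax, ebx, ecx, edx;
 *		uint64_t mask;
 *		int i;
 *
 *		// Main leaf 0xD, sub-leaf 0: EDX:EAX is the feature mask.
 *		if (__get_cpuid_count(0xD, 0, &eax, &ebx, &ecx, &edx) == 0)
 *			return (1);
 *		mask = ((uint64_t)edx << 32) | eax;
 *
 *		// Sub-leaves 2..62 describe one save-area component each.
 *		for (i = 2; i < 63; i++) {
 *			if ((mask & (1ULL << i)) == 0)
 *				continue;
 *			(void) __get_cpuid_count(0xD, i, &eax, &ebx,
 *			    &ecx, &edx);
 *			(void) printf("component %d: size %u, offset %u\n",
 *			    i, eax, ebx);
 *		}
 *		return (0);
 *	}
 */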
5209ab5bb018SKeith M Wesolowski 
5210ab5bb018SKeith M Wesolowski 	if ((cpi->cpi_xmaxeax & CPUID_LEAF_EXT_0) == 0)
5211ab5bb018SKeith M Wesolowski 		return;
5212ab5bb018SKeith M Wesolowski 
5213ab5bb018SKeith M Wesolowski 	if ((nmax = cpi->cpi_xmaxeax - CPUID_LEAF_EXT_0 + 1) > NMAX_CPI_EXTD)
5214ab5bb018SKeith M Wesolowski 		nmax = NMAX_CPI_EXTD;
5215ab5bb018SKeith M Wesolowski 	/*
5216651a12cbSRobert Mustacchi 	 * Copy the extended properties, fixing them as we go. We start at 2
5217651a12cbSRobert Mustacchi 	 * because we've already handled a few cases in the basic pass; the
5218651a12cbSRobert Mustacchi 	 * rest we simply grab again here (e.g. 0x8, 0x21).
5219ab5bb018SKeith M Wesolowski 	 */
5220ab5bb018SKeith M Wesolowski 	iptr = (void *)cpi->cpi_brandstr;
5221ab5bb018SKeith M Wesolowski 	for (n = 2, cp = &cpi->cpi_extd[2]; n < nmax; cp++, n++) {
5222ab5bb018SKeith M Wesolowski 		cp->cp_eax = CPUID_LEAF_EXT_0 + n;
5223ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(cp);
5224ab5bb018SKeith M Wesolowski 		platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_0 + n,
5225ab5bb018SKeith M Wesolowski 		    cp);
5226ab5bb018SKeith M Wesolowski 		switch (n) {
5227ab5bb018SKeith M Wesolowski 		case 2:
5228ab5bb018SKeith M Wesolowski 		case 3:
5229ab5bb018SKeith M Wesolowski 		case 4:
5230ab5bb018SKeith M Wesolowski 			/*
5231ab5bb018SKeith M Wesolowski 			 * Extract the brand string
5232ab5bb018SKeith M Wesolowski 			 */
5233ab5bb018SKeith M Wesolowski 			*iptr++ = cp->cp_eax;
5234ab5bb018SKeith M Wesolowski 			*iptr++ = cp->cp_ebx;
5235ab5bb018SKeith M Wesolowski 			*iptr++ = cp->cp_ecx;
5236ab5bb018SKeith M Wesolowski 			*iptr++ = cp->cp_edx;
5237ab5bb018SKeith M Wesolowski 			break;
5238ab5bb018SKeith M Wesolowski 		case 5:
5239ab5bb018SKeith M Wesolowski 			switch (cpi->cpi_vendor) {
5240ab5bb018SKeith M Wesolowski 			case X86_VENDOR_AMD:
5241ab5bb018SKeith M Wesolowski 				/*
5242ab5bb018SKeith M Wesolowski 				 * The Athlon and Duron were the first
5243ab5bb018SKeith M Wesolowski 				 * parts to report the sizes of the
5244ab5bb018SKeith M Wesolowski 				 * TLB for large pages. Before then,
5245ab5bb018SKeith M Wesolowski 				 * we don't trust the data.
5246ab5bb018SKeith M Wesolowski 				 */
5247ab5bb018SKeith M Wesolowski 				if (cpi->cpi_family < 6 ||
5248ab5bb018SKeith M Wesolowski 				    (cpi->cpi_family == 6 &&
5249ab5bb018SKeith M Wesolowski 				    cpi->cpi_model < 1))
5250ab5bb018SKeith M Wesolowski 					cp->cp_eax = 0;
5251ab5bb018SKeith M Wesolowski 				break;
5252ab5bb018SKeith M Wesolowski 			default:
5253ab5bb018SKeith M Wesolowski 				break;
5254ab5bb018SKeith M Wesolowski 			}
5255ab5bb018SKeith M Wesolowski 			break;
5256ab5bb018SKeith M Wesolowski 		case 6:
5257ab5bb018SKeith M Wesolowski 			switch (cpi->cpi_vendor) {
5258ab5bb018SKeith M Wesolowski 			case X86_VENDOR_AMD:
5259ab5bb018SKeith M Wesolowski 				/*
5260ab5bb018SKeith M Wesolowski 				 * The Athlon and Duron were the first
5261ab5bb018SKeith M Wesolowski 				 * AMD parts with L2 TLBs.
5262ab5bb018SKeith M Wesolowski 				 * Before then, don't trust the data.
5263ab5bb018SKeith M Wesolowski 				 */
5264ab5bb018SKeith M Wesolowski 				if (cpi->cpi_family < 6 ||
5265ab5bb018SKeith M Wesolowski 				    (cpi->cpi_family == 6 &&
5266ab5bb018SKeith M Wesolowski 				    cpi->cpi_model < 1))
5267ab5bb018SKeith M Wesolowski 					cp->cp_eax = cp->cp_ebx = 0;
5268ab5bb018SKeith M Wesolowski 				/*
5269ab5bb018SKeith M Wesolowski 				 * AMD Duron rev A0 reports L2
5270ab5bb018SKeith M Wesolowski 				 * cache size incorrectly as 1K
5271ab5bb018SKeith M Wesolowski 				 * when it is really 64K
5272ab5bb018SKeith M Wesolowski 				 */
5273ab5bb018SKeith M Wesolowski 				if (cpi->cpi_family == 6 &&
5274ab5bb018SKeith M Wesolowski 				    cpi->cpi_model == 3 &&
5275ab5bb018SKeith M Wesolowski 				    cpi->cpi_step == 0) {
5276ab5bb018SKeith M Wesolowski 					cp->cp_ecx &= 0xffff;
5277ab5bb018SKeith M Wesolowski 					cp->cp_ecx |= 0x400000;
5278ab5bb018SKeith M Wesolowski 				}
5279ab5bb018SKeith M Wesolowski 				break;
5280ab5bb018SKeith M Wesolowski 			case X86_VENDOR_Cyrix:	/* VIA C3 */
5281ab5bb018SKeith M Wesolowski 				/*
5282ab5bb018SKeith M Wesolowski 				 * VIA C3 processors are a bit messed
5283ab5bb018SKeith M Wesolowski 				 * up w.r.t. encoding cache sizes in %ecx
5284ab5bb018SKeith M Wesolowski 				 */
5285ab5bb018SKeith M Wesolowski 				if (cpi->cpi_family != 6)
5286ab5bb018SKeith M Wesolowski 					break;
5287ab5bb018SKeith M Wesolowski 				/*
5288ab5bb018SKeith M Wesolowski 				 * models 7 and 8 were incorrectly encoded
5289ab5bb018SKeith M Wesolowski 				 *
5290ab5bb018SKeith M Wesolowski 				 * xxx is model 8 really broken?
5291ab5bb018SKeith M Wesolowski 				 */
5292ab5bb018SKeith M Wesolowski 				if (cpi->cpi_model == 7 ||
5293ab5bb018SKeith M Wesolowski 				    cpi->cpi_model == 8)
5294ab5bb018SKeith M Wesolowski 					cp->cp_ecx =
5295ab5bb018SKeith M Wesolowski 					    BITX(cp->cp_ecx, 31, 24) << 16 |
5296ab5bb018SKeith M Wesolowski 					    BITX(cp->cp_ecx, 23, 16) << 12 |
5297ab5bb018SKeith M Wesolowski 					    BITX(cp->cp_ecx, 15, 8) << 8 |
5298ab5bb018SKeith M Wesolowski 					    BITX(cp->cp_ecx, 7, 0);
5299ab5bb018SKeith M Wesolowski 				/*
5300ab5bb018SKeith M Wesolowski 				 * model 9 stepping 1 has wrong associativity
5301ab5bb018SKeith M Wesolowski 				 */
5302ab5bb018SKeith M Wesolowski 				if (cpi->cpi_model == 9 && cpi->cpi_step == 1)
5303ab5bb018SKeith M Wesolowski 					cp->cp_ecx |= 8 << 12;
5304ab5bb018SKeith M Wesolowski 				break;
5305ab5bb018SKeith M Wesolowski 			case X86_VENDOR_Intel:
5306ab5bb018SKeith M Wesolowski 				/*
5307ab5bb018SKeith M Wesolowski 				 * Extended L2 Cache features function.
5308ab5bb018SKeith M Wesolowski 				 * First appeared on Prescott.
5309ab5bb018SKeith M Wesolowski 				 */
5310ab5bb018SKeith M Wesolowski 			default:
5311ab5bb018SKeith M Wesolowski 				break;
5312ab5bb018SKeith M Wesolowski 			}
5313ab5bb018SKeith M Wesolowski 			break;
5314ab5bb018SKeith M Wesolowski 		default:
5315ab5bb018SKeith M Wesolowski 			break;
5316ab5bb018SKeith M Wesolowski 		}
5317ab5bb018SKeith M Wesolowski 	}
5318ab5bb018SKeith M Wesolowski }
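
/*
 * Illustrative sketch -- not part of this file.  The brand string extracted
 * in the loop above (cases 2 through 4) is nothing more than twelve 32-bit
 * registers stored in order and NUL-terminated.  A userland equivalent,
 * assuming GCC/Clang's __get_cpuid() helper from <cpuid.h>:
 *
 *	#include <stdio.h>
 *	#include <string.h>
 *	#include <cpuid.h>
 *
 *	int
 *	main(void)
 *	{
 *		unsigned int regs[12];
 *		char brand[49];
 *		int i;
 *
 *		// Leaves 0x80000002-4 each yield 16 bytes of the string.
 *		for (i = 0; i < 3; i++) {
 *			(void) __get_cpuid(0x80000002 + i,
 *			    &regs[i * 4 + 0], &regs[i * 4 + 1],
 *			    &regs[i * 4 + 2], &regs[i * 4 + 3]);
 *		}
 *		(void) memcpy(brand, regs, 48);
 *		brand[48] = '\0';
 *		(void) printf("%s\n", brand);
 *		return (0);
 *	}
 */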
5319ab5bb018SKeith M Wesolowski 
5320ab5bb018SKeith M Wesolowski static const char *
5321ab5bb018SKeith M Wesolowski intel_cpubrand(const struct cpuid_info *cpi)
5322ab5bb018SKeith M Wesolowski {
5323ab5bb018SKeith M Wesolowski 	int i;
5324ab5bb018SKeith M Wesolowski 
5325ab5bb018SKeith M Wesolowski 	ASSERT(is_x86_feature(x86_featureset, X86FSET_CPUID));
5326ab5bb018SKeith M Wesolowski 
5327ab5bb018SKeith M Wesolowski 	switch (cpi->cpi_family) {
5328ab5bb018SKeith M Wesolowski 	case 5:
5329ab5bb018SKeith M Wesolowski 		return ("Intel Pentium(r)");
5330ab5bb018SKeith M Wesolowski 	case 6:
5331ab5bb018SKeith M Wesolowski 		switch (cpi->cpi_model) {
5332ab5bb018SKeith M Wesolowski 			uint_t celeron, xeon;
5333ab5bb018SKeith M Wesolowski 			const struct cpuid_regs *cp;
5334ab5bb018SKeith M Wesolowski 		case 0:
5335ab5bb018SKeith M Wesolowski 		case 1:
5336ab5bb018SKeith M Wesolowski 		case 2:
5337ab5bb018SKeith M Wesolowski 			return ("Intel Pentium(r) Pro");
5338ab5bb018SKeith M Wesolowski 		case 3:
5339ab5bb018SKeith M Wesolowski 		case 4:
5340ab5bb018SKeith M Wesolowski 			return ("Intel Pentium(r) II");
5341ab5bb018SKeith M Wesolowski 		case 6:
5342ab5bb018SKeith M Wesolowski 			return ("Intel Celeron(r)");
5343ab5bb018SKeith M Wesolowski 		case 5:
5344ab5bb018SKeith M Wesolowski 		case 7:
5345ab5bb018SKeith M Wesolowski 			celeron = xeon = 0;
5346ab5bb018SKeith M Wesolowski 			cp = &cpi->cpi_std[2];	/* cache info */
5347ab5bb018SKeith M Wesolowski 
5348ab5bb018SKeith M Wesolowski 			for (i = 1; i < 4; i++) {
5349ab5bb018SKeith M Wesolowski 				uint_t tmp;
5350ab5bb018SKeith M Wesolowski 
5351ab5bb018SKeith M Wesolowski 				tmp = (cp->cp_eax >> (8 * i)) & 0xff;
5352ab5bb018SKeith M Wesolowski 				if (tmp == 0x40)
5353ab5bb018SKeith M Wesolowski 					celeron++;
5354ab5bb018SKeith M Wesolowski 				if (tmp >= 0x44 && tmp <= 0x45)
5355ab5bb018SKeith M Wesolowski 					xeon++;
5356ab5bb018SKeith M Wesolowski 			}
5357ab5bb018SKeith M Wesolowski 
5358ab5bb018SKeith M Wesolowski 			for (i = 0; i < 2; i++) {
5359ab5bb018SKeith M Wesolowski 				uint_t tmp;
5360ab5bb018SKeith M Wesolowski 
5361ab5bb018SKeith M Wesolowski 				tmp = (cp->cp_ebx >> (8 * i)) & 0xff;
5362ab5bb018SKeith M Wesolowski 				if (tmp == 0x40)
5363ab5bb018SKeith M Wesolowski 					celeron++;
5364ab5bb018SKeith M Wesolowski 				else if (tmp >= 0x44 && tmp <= 0x45)
5365ab5bb018SKeith M Wesolowski 					xeon++;
5366ab5bb018SKeith M Wesolowski 			}
5367ab5bb018SKeith M Wesolowski 
5368ab5bb018SKeith M Wesolowski 			for (i = 0; i < 4; i++) {
5369ab5bb018SKeith M Wesolowski 				uint_t tmp;
5370ab5bb018SKeith M Wesolowski 
5371ab5bb018SKeith M Wesolowski 				tmp = (cp->cp_ecx >> (8 * i)) & 0xff;
5372ab5bb018SKeith M Wesolowski 				if (tmp == 0x40)
5373ab5bb018SKeith M Wesolowski 					celeron++;
5374ab5bb018SKeith M Wesolowski 				else if (tmp >= 0x44 && tmp <= 0x45)
5375ab5bb018SKeith M Wesolowski 					xeon++;
5376ab5bb018SKeith M Wesolowski 			}
5377ab5bb018SKeith M Wesolowski 
5378ab5bb018SKeith M Wesolowski 			for (i = 0; i < 4; i++) {
5379ab5bb018SKeith M Wesolowski 				uint_t tmp;
5380ab5bb018SKeith M Wesolowski 
5381ab5bb018SKeith M Wesolowski 				tmp = (cp->cp_edx >> (8 * i)) & 0xff;
5382ab5bb018SKeith M Wesolowski 				if (tmp == 0x40)
5383ab5bb018SKeith M Wesolowski 					celeron++;
5384ab5bb018SKeith M Wesolowski 				else if (tmp >= 0x44 && tmp <= 0x45)
5385ab5bb018SKeith M Wesolowski 					xeon++;
5386ab5bb018SKeith M Wesolowski 			}
5387ab5bb018SKeith M Wesolowski 
5388ab5bb018SKeith M Wesolowski 			if (celeron)
5389ab5bb018SKeith M Wesolowski 				return ("Intel Celeron(r)");
5390ab5bb018SKeith M Wesolowski 			if (xeon)
5391ab5bb018SKeith M Wesolowski 				return (cpi->cpi_model == 5 ?
5392ab5bb018SKeith M Wesolowski 				    "Intel Pentium(r) II Xeon(tm)" :
5393ab5bb018SKeith M Wesolowski 				    "Intel Pentium(r) III Xeon(tm)");
5394ab5bb018SKeith M Wesolowski 			return (cpi->cpi_model == 5 ?
5395ab5bb018SKeith M Wesolowski 			    "Intel Pentium(r) II or Pentium(r) II Xeon(tm)" :
5396ab5bb018SKeith M Wesolowski 			    "Intel Pentium(r) III or Pentium(r) III Xeon(tm)");
5397ab5bb018SKeith M Wesolowski 		default:
5398ab5bb018SKeith M Wesolowski 			break;
5399ab5bb018SKeith M Wesolowski 		}
5400ab5bb018SKeith M Wesolowski 	default:
5401ab5bb018SKeith M Wesolowski 		break;
5402ab5bb018SKeith M Wesolowski 	}
5403ab5bb018SKeith M Wesolowski 
5404ab5bb018SKeith M Wesolowski 	/* BrandID is present if the field is nonzero */
5405ab5bb018SKeith M Wesolowski 	if (cpi->cpi_brandid != 0) {
5406ab5bb018SKeith M Wesolowski 		static const struct {
5407ab5bb018SKeith M Wesolowski 			uint_t bt_bid;
5408ab5bb018SKeith M Wesolowski 			const char *bt_str;
5409ab5bb018SKeith M Wesolowski 		} brand_tbl[] = {
5410ab5bb018SKeith M Wesolowski 			{ 0x1,	"Intel(r) Celeron(r)" },
5411ab5bb018SKeith M Wesolowski 			{ 0x2,	"Intel(r) Pentium(r) III" },
5412ab5bb018SKeith M Wesolowski 			{ 0x3,	"Intel(r) Pentium(r) III Xeon(tm)" },
5413ab5bb018SKeith M Wesolowski 			{ 0x4,	"Intel(r) Pentium(r) III" },
5414ab5bb018SKeith M Wesolowski 			{ 0x6,	"Mobile Intel(r) Pentium(r) III" },
5415ab5bb018SKeith M Wesolowski 			{ 0x7,	"Mobile Intel(r) Celeron(r)" },
5416ab5bb018SKeith M Wesolowski 			{ 0x8,	"Intel(r) Pentium(r) 4" },
5417ab5bb018SKeith M Wesolowski 			{ 0x9,	"Intel(r) Pentium(r) 4" },
5418ab5bb018SKeith M Wesolowski 			{ 0xa,	"Intel(r) Celeron(r)" },
5419ab5bb018SKeith M Wesolowski 			{ 0xb,	"Intel(r) Xeon(tm)" },
5420ab5bb018SKeith M Wesolowski 			{ 0xc,	"Intel(r) Xeon(tm) MP" },
5421ab5bb018SKeith M Wesolowski 			{ 0xe,	"Mobile Intel(r) Pentium(r) 4" },
5422ab5bb018SKeith M Wesolowski 			{ 0xf,	"Mobile Intel(r) Celeron(r)" },
5423ab5bb018SKeith M Wesolowski 			{ 0x11, "Mobile Genuine Intel(r)" },
5424ab5bb018SKeith M Wesolowski 			{ 0x12, "Intel(r) Celeron(r) M" },
5425ab5bb018SKeith M Wesolowski 			{ 0x13, "Mobile Intel(r) Celeron(r)" },
5426ab5bb018SKeith M Wesolowski 			{ 0x14, "Intel(r) Celeron(r)" },
5427ab5bb018SKeith M Wesolowski 			{ 0x15, "Mobile Genuine Intel(r)" },
5428ab5bb018SKeith M Wesolowski 			{ 0x16,	"Intel(r) Pentium(r) M" },
5429ab5bb018SKeith M Wesolowski 			{ 0x17, "Mobile Intel(r) Celeron(r)" }
5430ab5bb018SKeith M Wesolowski 		};
5431ab5bb018SKeith M Wesolowski 		uint_t btblmax = sizeof (brand_tbl) / sizeof (brand_tbl[0]);
5432ab5bb018SKeith M Wesolowski 		uint_t sgn;
5433ab5bb018SKeith M Wesolowski 
5434ab5bb018SKeith M Wesolowski 		sgn = (cpi->cpi_family << 8) |
5435ab5bb018SKeith M Wesolowski 		    (cpi->cpi_model << 4) | cpi->cpi_step;
5436ab5bb018SKeith M Wesolowski 
5437ab5bb018SKeith M Wesolowski 		for (i = 0; i < btblmax; i++)
5438ab5bb018SKeith M Wesolowski 			if (brand_tbl[i].bt_bid == cpi->cpi_brandid)
5439ab5bb018SKeith M Wesolowski 				break;
5440ab5bb018SKeith M Wesolowski 		if (i < btblmax) {
5441ab5bb018SKeith M Wesolowski 			if (sgn == 0x6b1 && cpi->cpi_brandid == 3)
5442ab5bb018SKeith M Wesolowski 				return ("Intel(r) Celeron(r)");
5443ab5bb018SKeith M Wesolowski 			if (sgn < 0xf13 && cpi->cpi_brandid == 0xb)
5444ab5bb018SKeith M Wesolowski 				return ("Intel(r) Xeon(tm) MP");
5445ab5bb018SKeith M Wesolowski 			if (sgn < 0xf13 && cpi->cpi_brandid == 0xe)
5446ab5bb018SKeith M Wesolowski 				return ("Intel(r) Xeon(tm)");
5447ab5bb018SKeith M Wesolowski 			return (brand_tbl[i].bt_str);
5448ab5bb018SKeith M Wesolowski 		}
5449ab5bb018SKeith M Wesolowski 	}
5450ab5bb018SKeith M Wesolowski 
5451ab5bb018SKeith M Wesolowski 	return (NULL);
5452ab5bb018SKeith M Wesolowski }
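
/*
 * Worked example of the BrandID path above: family 6, model 0xb, stepping 1
 * gives sgn = (6 << 8) | (0xb << 4) | 1 = 0x6b1, so a part reporting
 * BrandID 3 is reported as "Intel(r) Celeron(r)" rather than the table's
 * "Intel(r) Pentium(r) III Xeon(tm)".
 */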
5453ab5bb018SKeith M Wesolowski 
5454ab5bb018SKeith M Wesolowski static const char *
5455ab5bb018SKeith M Wesolowski amd_cpubrand(const struct cpuid_info *cpi)
5456ab5bb018SKeith M Wesolowski {
5457ab5bb018SKeith M Wesolowski 	ASSERT(is_x86_feature(x86_featureset, X86FSET_CPUID));
5458ab5bb018SKeith M Wesolowski 
5459ab5bb018SKeith M Wesolowski 	switch (cpi->cpi_family) {
5460ab5bb018SKeith M Wesolowski 	case 5:
5461ab5bb018SKeith M Wesolowski 		switch (cpi->cpi_model) {
5462ab5bb018SKeith M Wesolowski 		case 0:
5463ab5bb018SKeith M Wesolowski 		case 1:
5464ab5bb018SKeith M Wesolowski 		case 2:
5465ab5bb018SKeith M Wesolowski 		case 3:
5466ab5bb018SKeith M Wesolowski 		case 4:
5467ab5bb018SKeith M Wesolowski 		case 5:
5468ab5bb018SKeith M Wesolowski 			return ("AMD-K5(r)");
5469ab5bb018SKeith M Wesolowski 		case 6:
5470ab5bb018SKeith M Wesolowski 		case 7:
5471ab5bb018SKeith M Wesolowski 			return ("AMD-K6(r)");
5472ab5bb018SKeith M Wesolowski 		case 8:
5473ab5bb018SKeith M Wesolowski 			return ("AMD-K6(r)-2");
5474ab5bb018SKeith M Wesolowski 		case 9:
5475ab5bb018SKeith M Wesolowski 			return ("AMD-K6(r)-III");
5476ab5bb018SKeith M Wesolowski 		default:
5477ab5bb018SKeith M Wesolowski 			return ("AMD (family 5)");
5478ab5bb018SKeith M Wesolowski 		}
5479ab5bb018SKeith M Wesolowski 	case 6:
5480ab5bb018SKeith M Wesolowski 		switch (cpi->cpi_model) {
5481ab5bb018SKeith M Wesolowski 		case 1:
5482ab5bb018SKeith M Wesolowski 			return ("AMD-K7(tm)");
5483ab5bb018SKeith M Wesolowski 		case 0:
5484ab5bb018SKeith M Wesolowski 		case 2:
5485ab5bb018SKeith M Wesolowski 		case 4:
5486ab5bb018SKeith M Wesolowski 			return ("AMD Athlon(tm)");
5487ab5bb018SKeith M Wesolowski 		case 3:
5488ab5bb018SKeith M Wesolowski 		case 7:
5489ab5bb018SKeith M Wesolowski 			return ("AMD Duron(tm)");
5490ab5bb018SKeith M Wesolowski 		case 6:
5491ab5bb018SKeith M Wesolowski 		case 8:
5492ab5bb018SKeith M Wesolowski 		case 10:
5493ab5bb018SKeith M Wesolowski 			/*
5494ab5bb018SKeith M Wesolowski 			 * Use the L2 cache size to distinguish
5495ab5bb018SKeith M Wesolowski 			 */
5496ab5bb018SKeith M Wesolowski 			return ((cpi->cpi_extd[6].cp_ecx >> 16) >= 256 ?
5497ab5bb018SKeith M Wesolowski 			    "AMD Athlon(tm)" : "AMD Duron(tm)");
5498ab5bb018SKeith M Wesolowski 		default:
5499ab5bb018SKeith M Wesolowski 			return ("AMD (family 6)");
5500ab5bb018SKeith M Wesolowski 		}
5501ab5bb018SKeith M Wesolowski 	default:
5502ab5bb018SKeith M Wesolowski 		break;
5503ab5bb018SKeith M Wesolowski 	}
5504ab5bb018SKeith M Wesolowski 
5505ab5bb018SKeith M Wesolowski 	if (cpi->cpi_family == 0xf && cpi->cpi_model == 5 &&
5506ab5bb018SKeith M Wesolowski 	    cpi->cpi_brandid != 0) {
5507ab5bb018SKeith M Wesolowski 		switch (BITX(cpi->cpi_brandid, 7, 5)) {
5508ab5bb018SKeith M Wesolowski 		case 3:
5509ab5bb018SKeith M Wesolowski 			return ("AMD Opteron(tm) UP 1xx");
5510ab5bb018SKeith M Wesolowski 		case 4:
5511ab5bb018SKeith M Wesolowski 			return ("AMD Opteron(tm) DP 2xx");
5512ab5bb018SKeith M Wesolowski 		case 5:
5513ab5bb018SKeith M Wesolowski 			return ("AMD Opteron(tm) MP 8xx");
5514ab5bb018SKeith M Wesolowski 		default:
5515ab5bb018SKeith M Wesolowski 			return ("AMD Opteron(tm)");
5516ab5bb018SKeith M Wesolowski 		}
5517ab5bb018SKeith M Wesolowski 	}
5518ab5bb018SKeith M Wesolowski 
5519ab5bb018SKeith M Wesolowski 	return (NULL);
5520ab5bb018SKeith M Wesolowski }
5521ab5bb018SKeith M Wesolowski 
5522ab5bb018SKeith M Wesolowski static const char *
5523ab5bb018SKeith M Wesolowski cyrix_cpubrand(struct cpuid_info *cpi, uint_t type)
5524ab5bb018SKeith M Wesolowski {
5525ab5bb018SKeith M Wesolowski 	ASSERT(is_x86_feature(x86_featureset, X86FSET_CPUID));
5526ab5bb018SKeith M Wesolowski 
5527ab5bb018SKeith M Wesolowski 	switch (type) {
5528ab5bb018SKeith M Wesolowski 	case X86_TYPE_CYRIX_6x86:
5529ab5bb018SKeith M Wesolowski 		return ("Cyrix 6x86");
5530ab5bb018SKeith M Wesolowski 	case X86_TYPE_CYRIX_6x86L:
5531ab5bb018SKeith M Wesolowski 		return ("Cyrix 6x86L");
5532ab5bb018SKeith M Wesolowski 	case X86_TYPE_CYRIX_6x86MX:
5533ab5bb018SKeith M Wesolowski 		return ("Cyrix 6x86MX");
5534ab5bb018SKeith M Wesolowski 	case X86_TYPE_CYRIX_GXm:
5535ab5bb018SKeith M Wesolowski 		return ("Cyrix GXm");
5536ab5bb018SKeith M Wesolowski 	case X86_TYPE_CYRIX_MediaGX:
5537ab5bb018SKeith M Wesolowski 		return ("Cyrix MediaGX");
5538ab5bb018SKeith M Wesolowski 	case X86_TYPE_CYRIX_MII:
5539ab5bb018SKeith M Wesolowski 		return ("Cyrix M2");
5540ab5bb018SKeith M Wesolowski 	case X86_TYPE_VIA_CYRIX_III:
5541ab5bb018SKeith M Wesolowski 		return ("VIA Cyrix M3");
5542ab5bb018SKeith M Wesolowski 	default:
5543ab5bb018SKeith M Wesolowski 		/*
5544ab5bb018SKeith M Wesolowski 		 * Have another wild guess ..
5545ab5bb018SKeith M Wesolowski 		 */
5546ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 4 && cpi->cpi_model == 9)
5547ab5bb018SKeith M Wesolowski 			return ("Cyrix 5x86");
5548ab5bb018SKeith M Wesolowski 		else if (cpi->cpi_family == 5) {
5549ab5bb018SKeith M Wesolowski 			switch (cpi->cpi_model) {
5550ab5bb018SKeith M Wesolowski 			case 2:
5551ab5bb018SKeith M Wesolowski 				return ("Cyrix 6x86");	/* Cyrix M1 */
5552ab5bb018SKeith M Wesolowski 			case 4:
5553ab5bb018SKeith M Wesolowski 				return ("Cyrix MediaGX");
5554ab5bb018SKeith M Wesolowski 			default:
5555ab5bb018SKeith M Wesolowski 				break;
5556ab5bb018SKeith M Wesolowski 			}
5557ab5bb018SKeith M Wesolowski 		} else if (cpi->cpi_family == 6) {
5558ab5bb018SKeith M Wesolowski 			switch (cpi->cpi_model) {
5559ab5bb018SKeith M Wesolowski 			case 0:
5560ab5bb018SKeith M Wesolowski 				return ("Cyrix 6x86MX"); /* Cyrix M2? */
5561ab5bb018SKeith M Wesolowski 			case 5:
5562ab5bb018SKeith M Wesolowski 			case 6:
5563ab5bb018SKeith M Wesolowski 			case 7:
5564ab5bb018SKeith M Wesolowski 			case 8:
5565ab5bb018SKeith M Wesolowski 			case 9:
5566ab5bb018SKeith M Wesolowski 				return ("VIA C3");
5567ab5bb018SKeith M Wesolowski 			default:
5568ab5bb018SKeith M Wesolowski 				break;
5569ab5bb018SKeith M Wesolowski 			}
5570ab5bb018SKeith M Wesolowski 		}
5571ab5bb018SKeith M Wesolowski 		break;
5572ab5bb018SKeith M Wesolowski 	}
5573ab5bb018SKeith M Wesolowski 	return (NULL);
5574ab5bb018SKeith M Wesolowski }
5575ab5bb018SKeith M Wesolowski 
5576ab5bb018SKeith M Wesolowski /*
5577ab5bb018SKeith M Wesolowski  * This only gets called in the case that the CPU extended
5578ab5bb018SKeith M Wesolowski  * feature brand string leaves (0x80000002, 0x80000003, 0x80000004)
5579ab5bb018SKeith M Wesolowski  * aren't available, or contain null bytes for some reason.
5580ab5bb018SKeith M Wesolowski  */
5581ab5bb018SKeith M Wesolowski static void
5582ab5bb018SKeith M Wesolowski fabricate_brandstr(struct cpuid_info *cpi)
5583ab5bb018SKeith M Wesolowski {
5584ab5bb018SKeith M Wesolowski 	const char *brand = NULL;
5585ab5bb018SKeith M Wesolowski 
5586ab5bb018SKeith M Wesolowski 	switch (cpi->cpi_vendor) {
5587ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Intel:
5588ab5bb018SKeith M Wesolowski 		brand = intel_cpubrand(cpi);
5589ab5bb018SKeith M Wesolowski 		break;
5590ab5bb018SKeith M Wesolowski 	case X86_VENDOR_AMD:
5591ab5bb018SKeith M Wesolowski 		brand = amd_cpubrand(cpi);
5592ab5bb018SKeith M Wesolowski 		break;
5593ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Cyrix:
5594ab5bb018SKeith M Wesolowski 		brand = cyrix_cpubrand(cpi, x86_type);
5595ab5bb018SKeith M Wesolowski 		break;
5596ab5bb018SKeith M Wesolowski 	case X86_VENDOR_NexGen:
5597ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 5 && cpi->cpi_model == 0)
5598ab5bb018SKeith M Wesolowski 			brand = "NexGen Nx586";
5599ab5bb018SKeith M Wesolowski 		break;
5600ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Centaur:
5601ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 5)
5602ab5bb018SKeith M Wesolowski 			switch (cpi->cpi_model) {
5603ab5bb018SKeith M Wesolowski 			case 4:
5604ab5bb018SKeith M Wesolowski 				brand = "Centaur C6";
5605ab5bb018SKeith M Wesolowski 				break;
5606ab5bb018SKeith M Wesolowski 			case 8:
5607ab5bb018SKeith M Wesolowski 				brand = "Centaur C2";
5608ab5bb018SKeith M Wesolowski 				break;
5609ab5bb018SKeith M Wesolowski 			case 9:
5610ab5bb018SKeith M Wesolowski 				brand = "Centaur C3";
5611ab5bb018SKeith M Wesolowski 				break;
5612ab5bb018SKeith M Wesolowski 			default:
5613ab5bb018SKeith M Wesolowski 				break;
5614ab5bb018SKeith M Wesolowski 			}
5615ab5bb018SKeith M Wesolowski 		break;
5616ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Rise:
5617ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 5 &&
5618ab5bb018SKeith M Wesolowski 		    (cpi->cpi_model == 0 || cpi->cpi_model == 2))
5619ab5bb018SKeith M Wesolowski 			brand = "Rise mP6";
5620ab5bb018SKeith M Wesolowski 		break;
5621ab5bb018SKeith M Wesolowski 	case X86_VENDOR_SiS:
5622ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 5 && cpi->cpi_model == 0)
5623ab5bb018SKeith M Wesolowski 			brand = "SiS 55x";
5624ab5bb018SKeith M Wesolowski 		break;
5625ab5bb018SKeith M Wesolowski 	case X86_VENDOR_TM:
5626ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family == 5 && cpi->cpi_model == 4)
5627ab5bb018SKeith M Wesolowski 			brand = "Transmeta Crusoe TM3x00 or TM5x00";
5628ab5bb018SKeith M Wesolowski 		break;
5629ab5bb018SKeith M Wesolowski 	case X86_VENDOR_NSC:
5630ab5bb018SKeith M Wesolowski 	case X86_VENDOR_UMC:
5631ab5bb018SKeith M Wesolowski 	default:
5632ab5bb018SKeith M Wesolowski 		break;
5633ab5bb018SKeith M Wesolowski 	}
5634ab5bb018SKeith M Wesolowski 	if (brand) {
5635ab5bb018SKeith M Wesolowski 		(void) strcpy((char *)cpi->cpi_brandstr, brand);
5636ab5bb018SKeith M Wesolowski 		return;
5637ab5bb018SKeith M Wesolowski 	}
5638ab5bb018SKeith M Wesolowski 
5639ab5bb018SKeith M Wesolowski 	/*
5640ab5bb018SKeith M Wesolowski 	 * If all else fails ...
5641ab5bb018SKeith M Wesolowski 	 */
5642ab5bb018SKeith M Wesolowski 	(void) snprintf(cpi->cpi_brandstr, sizeof (cpi->cpi_brandstr),
5643ab5bb018SKeith M Wesolowski 	    "%s %d.%d.%d", cpi->cpi_vendorstr, cpi->cpi_family,
5644ab5bb018SKeith M Wesolowski 	    cpi->cpi_model, cpi->cpi_step);
5645ab5bb018SKeith M Wesolowski }
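
/*
 * For example (hypothetical values): a part whose vendor-specific lookups
 * above all returned NULL, with vendor string "GenuineIntel", family 19,
 * model 1, stepping 2, ends up with the fabricated brand string
 * "GenuineIntel 19.1.2".
 */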
5646ab5bb018SKeith M Wesolowski 
5647ab5bb018SKeith M Wesolowski /*
5648ab5bb018SKeith M Wesolowski  * This routine is called just after kernel memory allocation
5649ab5bb018SKeith M Wesolowski  * becomes available on cpu0, and as part of mp_startup() on
5650ab5bb018SKeith M Wesolowski  * the other cpus.
5651ab5bb018SKeith M Wesolowski  *
5652ab5bb018SKeith M Wesolowski  * Fixup the brand string, and collect any information from cpuid
5653ab5bb018SKeith M Wesolowski  * that requires dynamically allocated storage to represent.
5654ab5bb018SKeith M Wesolowski  */
5655ab5bb018SKeith M Wesolowski 
5656ab5bb018SKeith M Wesolowski static void
5657ab5bb018SKeith M Wesolowski cpuid_pass_dynamic(cpu_t *cpu, void *_arg __unused)
5658ab5bb018SKeith M Wesolowski {
5659ab5bb018SKeith M Wesolowski 	int	i, max, shft, level, size;
5660ab5bb018SKeith M Wesolowski 	struct cpuid_regs regs;
5661ab5bb018SKeith M Wesolowski 	struct cpuid_regs *cp;
5662ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
5663ab5bb018SKeith M Wesolowski 
5664ab5bb018SKeith M Wesolowski 	/*
5665ab5bb018SKeith M Wesolowski 	 * Deterministic cache parameters
5666ab5bb018SKeith M Wesolowski 	 *
5667ab5bb018SKeith M Wesolowski 	 * Intel uses leaf 0x4 for this, while AMD uses leaf 0x8000001d. The
5668ab5bb018SKeith M Wesolowski 	 * values that are present are currently defined to be the same. This
5669ab5bb018SKeith M Wesolowski 	 * means we can use the same logic to parse it as long as we use the
5670ab5bb018SKeith M Wesolowski 	 * appropriate leaf to get the data. If you're updating this, make sure
5671ab5bb018SKeith M Wesolowski 	 * you're careful about which vendor supports which aspect.
5672ab5bb018SKeith M Wesolowski 	 *
5673ab5bb018SKeith M Wesolowski 	 * Take this opportunity to detect the number of threads sharing the
5674ab5bb018SKeith M Wesolowski 	 * last level cache, and construct a corresponding cache id. The
5675ab5bb018SKeith M Wesolowski 	 * respective cpuid_info members are initialized to the default case of
5676ab5bb018SKeith M Wesolowski 	 * "no last level cache sharing".
5677ab5bb018SKeith M Wesolowski 	 */
5678ab5bb018SKeith M Wesolowski 	cpi->cpi_ncpu_shr_last_cache = 1;
5679ab5bb018SKeith M Wesolowski 	cpi->cpi_last_lvl_cacheid = cpu->cpu_id;
5680ab5bb018SKeith M Wesolowski 
5681ab5bb018SKeith M Wesolowski 	if ((cpi->cpi_maxeax >= 4 && cpi->cpi_vendor == X86_VENDOR_Intel) ||
5682ab5bb018SKeith M Wesolowski 	    ((cpi->cpi_vendor == X86_VENDOR_AMD ||
5683ab5bb018SKeith M Wesolowski 	    cpi->cpi_vendor == X86_VENDOR_HYGON) &&
5684ab5bb018SKeith M Wesolowski 	    cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1d &&
5685ab5bb018SKeith M Wesolowski 	    is_x86_feature(x86_featureset, X86FSET_TOPOEXT))) {
5686ab5bb018SKeith M Wesolowski 		uint32_t leaf;
5687ab5bb018SKeith M Wesolowski 
5688ab5bb018SKeith M Wesolowski 		if (cpi->cpi_vendor == X86_VENDOR_Intel) {
5689ab5bb018SKeith M Wesolowski 			leaf = 4;
5690ab5bb018SKeith M Wesolowski 		} else {
5691ab5bb018SKeith M Wesolowski 			leaf = CPUID_LEAF_EXT_1d;
5692ab5bb018SKeith M Wesolowski 		}
5693ab5bb018SKeith M Wesolowski 
5694ab5bb018SKeith M Wesolowski 		/*
5695ab5bb018SKeith M Wesolowski 		 * Find the # of elements (size) returned by the leaf and along
5696ab5bb018SKeith M Wesolowski 		 * the way detect last level cache sharing details.
5697ab5bb018SKeith M Wesolowski 		 */
5698ab5bb018SKeith M Wesolowski 		bzero(&regs, sizeof (regs));
5699ab5bb018SKeith M Wesolowski 		cp = &regs;
5700ab5bb018SKeith M Wesolowski 		for (i = 0, max = 0; i < CPI_FN4_ECX_MAX; i++) {
5701ab5bb018SKeith M Wesolowski 			cp->cp_eax = leaf;
5702ab5bb018SKeith M Wesolowski 			cp->cp_ecx = i;
5703ab5bb018SKeith M Wesolowski 
5704ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(cp);
5705ab5bb018SKeith M Wesolowski 
5706ab5bb018SKeith M Wesolowski 			if (CPI_CACHE_TYPE(cp) == 0)
5707ab5bb018SKeith M Wesolowski 				break;
5708ab5bb018SKeith M Wesolowski 			level = CPI_CACHE_LVL(cp);
5709ab5bb018SKeith M Wesolowski 			if (level > max) {
5710ab5bb018SKeith M Wesolowski 				max = level;
5711ab5bb018SKeith M Wesolowski 				cpi->cpi_ncpu_shr_last_cache =
5712ab5bb018SKeith M Wesolowski 				    CPI_NTHR_SHR_CACHE(cp) + 1;
5713ab5bb018SKeith M Wesolowski 			}
5714ab5bb018SKeith M Wesolowski 		}
5715ab5bb018SKeith M Wesolowski 		cpi->cpi_cache_leaf_size = size = i;
5716ab5bb018SKeith M Wesolowski 
5717ab5bb018SKeith M Wesolowski 		/*
5718ab5bb018SKeith M Wesolowski 		 * Allocate the cpi_cache_leaves array. The first element
5719ab5bb018SKeith M Wesolowski 		 * references the regs for the corresponding leaf with %ecx set
5720ab5bb018SKeith M Wesolowski 		 * to 0. This was gathered in cpuid_pass_extended().
5721ab5bb018SKeith M Wesolowski 		 */
5722ab5bb018SKeith M Wesolowski 		if (size > 0) {
5723ab5bb018SKeith M Wesolowski 			cpi->cpi_cache_leaves =
5724ab5bb018SKeith M Wesolowski 			    kmem_alloc(size * sizeof (cp), KM_SLEEP);
5725ab5bb018SKeith M Wesolowski 			if (cpi->cpi_vendor == X86_VENDOR_Intel) {
5726ab5bb018SKeith M Wesolowski 				cpi->cpi_cache_leaves[0] = &cpi->cpi_std[4];
5727ab5bb018SKeith M Wesolowski 			} else {
5728ab5bb018SKeith M Wesolowski 				cpi->cpi_cache_leaves[0] = &cpi->cpi_extd[0x1d];
5729ab5bb018SKeith M Wesolowski 			}
5730ab5bb018SKeith M Wesolowski 
5731ab5bb018SKeith M Wesolowski 			/*
5732ab5bb018SKeith M Wesolowski 			 * Allocate storage to hold the additional regs
5733ab5bb018SKeith M Wesolowski 			 * for the leaf, %ecx == 1 .. cpi_cache_leaf_size.
5734ab5bb018SKeith M Wesolowski 			 *
5735ab5bb018SKeith M Wesolowski 			 * The regs for the leaf, %ecx == 0 has already
5736ab5bb018SKeith M Wesolowski 			 * been allocated as indicated above.
5737ab5bb018SKeith M Wesolowski 			 */
5738ab5bb018SKeith M Wesolowski 			for (i = 1; i < size; i++) {
5739ab5bb018SKeith M Wesolowski 				cp = cpi->cpi_cache_leaves[i] =
5740ab5bb018SKeith M Wesolowski 				    kmem_zalloc(sizeof (regs), KM_SLEEP);
5741ab5bb018SKeith M Wesolowski 				cp->cp_eax = leaf;
5742ab5bb018SKeith M Wesolowski 				cp->cp_ecx = i;
5743ab5bb018SKeith M Wesolowski 
5744ab5bb018SKeith M Wesolowski 				(void) __cpuid_insn(cp);
5745ab5bb018SKeith M Wesolowski 			}
5746ab5bb018SKeith M Wesolowski 		}
5747ab5bb018SKeith M Wesolowski 		/*
5748ab5bb018SKeith M Wesolowski 		 * Determine the number of bits needed to represent
5749ab5bb018SKeith M Wesolowski 		 * the number of CPUs sharing the last level cache.
5750ab5bb018SKeith M Wesolowski 		 *
5751ab5bb018SKeith M Wesolowski 		 * Shift off that number of bits from the APIC id to
5752ab5bb018SKeith M Wesolowski 		 * derive the cache id.
5753ab5bb018SKeith M Wesolowski 		 */
5754ab5bb018SKeith M Wesolowski 		shft = 0;
5755ab5bb018SKeith M Wesolowski 		for (i = 1; i < cpi->cpi_ncpu_shr_last_cache; i <<= 1)
5756ab5bb018SKeith M Wesolowski 			shft++;
5757ab5bb018SKeith M Wesolowski 		cpi->cpi_last_lvl_cacheid = cpi->cpi_apicid >> shft;
5758ab5bb018SKeith M Wesolowski 	}
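
/*
 * Worked example -- illustrative, not part of this file.  The loop above
 * computes shft = ceil(log2(cpi_ncpu_shr_last_cache)).  With 12 CPUs
 * sharing the last level cache, i walks 1, 2, 4, 8, 16 and shft ends up as
 * 4; APIC ids 0x20 through 0x2b then all shift down to cache id 2,
 * grouping those CPUs on the same LLC.  The same computation standalone:
 *
 *	static unsigned int
 *	llc_cacheid(unsigned int apicid, unsigned int ncpu_shr)
 *	{
 *		unsigned int i, shft = 0;
 *
 *		for (i = 1; i < ncpu_shr; i <<= 1)
 *			shft++;
 *		return (apicid >> shft);
 *	}
 *
 *	llc_cacheid(0x20, 12) == 2 && llc_cacheid(0x2b, 12) == 2
 */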
5759ab5bb018SKeith M Wesolowski 
5760ab5bb018SKeith M Wesolowski 	/*
5761ab5bb018SKeith M Wesolowski 	 * Now fixup the brand string
5762ab5bb018SKeith M Wesolowski 	 */
5763ab5bb018SKeith M Wesolowski 	if ((cpi->cpi_xmaxeax & CPUID_LEAF_EXT_0) == 0) {
5764ab5bb018SKeith M Wesolowski 		fabricate_brandstr(cpi);
5765ab5bb018SKeith M Wesolowski 	} else {
5766ab5bb018SKeith M Wesolowski 
5767ab5bb018SKeith M Wesolowski 		/*
5768ab5bb018SKeith M Wesolowski 		 * If we successfully extracted a brand string from the cpuid
5769ab5bb018SKeith M Wesolowski 		 * instruction, clean it up by removing leading spaces and
5770ab5bb018SKeith M Wesolowski 		 * similar junk.
5771ab5bb018SKeith M Wesolowski 		 */
5772ab5bb018SKeith M Wesolowski 		if (cpi->cpi_brandstr[0]) {
5773ab5bb018SKeith M Wesolowski 			size_t maxlen = sizeof (cpi->cpi_brandstr);
5774ab5bb018SKeith M Wesolowski 			char *src, *dst;
5775ab5bb018SKeith M Wesolowski 
5776ab5bb018SKeith M Wesolowski 			dst = src = (char *)cpi->cpi_brandstr;
5777ab5bb018SKeith M Wesolowski 			src[maxlen - 1] = '\0';
5778ab5bb018SKeith M Wesolowski 			/*
5779ab5bb018SKeith M Wesolowski 			 * strip leading spaces
5780ab5bb018SKeith M Wesolowski 			 */
5781ab5bb018SKeith M Wesolowski 			while (*src == ' ')
5782ab5bb018SKeith M Wesolowski 				src++;
5783ab5bb018SKeith M Wesolowski 			/*
5784ab5bb018SKeith M Wesolowski 			 * Remove any "Genuine " or "Authentic " prefixes
5785ab5bb018SKeith M Wesolowski 			 */
5786ab5bb018SKeith M Wesolowski 			if (strncmp(src, "Genuine ", 8) == 0)
5787ab5bb018SKeith M Wesolowski 				src += 8;
5788ab5bb018SKeith M Wesolowski 			if (strncmp(src, "Authentic ", 10) == 0)
5789ab5bb018SKeith M Wesolowski 				src += 10;
5790ab5bb018SKeith M Wesolowski 
5791ab5bb018SKeith M Wesolowski 			/*
5792ab5bb018SKeith M Wesolowski 			 * Now do an in-place copy.
5793ab5bb018SKeith M Wesolowski 			 * Map (R) to (r) and (TM) to (tm).
5794ab5bb018SKeith M Wesolowski 			 * The era of teletypes is long gone, and there's
5795ab5bb018SKeith M Wesolowski 			 * -really- no need to shout.
5796ab5bb018SKeith M Wesolowski 			 */
5797ab5bb018SKeith M Wesolowski 			while (*src != '\0') {
5798ab5bb018SKeith M Wesolowski 				if (src[0] == '(') {
5799ab5bb018SKeith M Wesolowski 					if (strncmp(src + 1, "R)", 2) == 0) {
5800ab5bb018SKeith M Wesolowski 						(void) strncpy(dst, "(r)", 3);
5801ab5bb018SKeith M Wesolowski 						src += 3;
5802ab5bb018SKeith M Wesolowski 						dst += 3;
5803ab5bb018SKeith M Wesolowski 						continue;
5804ab5bb018SKeith M Wesolowski 					}
5805ab5bb018SKeith M Wesolowski 					if (strncmp(src + 1, "TM)", 3) == 0) {
5806ab5bb018SKeith M Wesolowski 						(void) strncpy(dst, "(tm)", 4);
5807ab5bb018SKeith M Wesolowski 						src += 4;
5808ab5bb018SKeith M Wesolowski 						dst += 4;
5809ab5bb018SKeith M Wesolowski 						continue;
5810ab5bb018SKeith M Wesolowski 					}
5811ab5bb018SKeith M Wesolowski 				}
5812ab5bb018SKeith M Wesolowski 				*dst++ = *src++;
5813ab5bb018SKeith M Wesolowski 			}
5814ab5bb018SKeith M Wesolowski 			*dst = '\0';
5815ab5bb018SKeith M Wesolowski 
5816ab5bb018SKeith M Wesolowski 			/*
5817ab5bb018SKeith M Wesolowski 			 * Finally, remove any trailing spaces
5818ab5bb018SKeith M Wesolowski 			 */
5819ab5bb018SKeith M Wesolowski 			while (--dst > cpi->cpi_brandstr)
5820ab5bb018SKeith M Wesolowski 				if (*dst == ' ')
5821ab5bb018SKeith M Wesolowski 					*dst = '\0';
5822ab5bb018SKeith M Wesolowski 				else
5823ab5bb018SKeith M Wesolowski 					break;
5824ab5bb018SKeith M Wesolowski 		} else
5825ab5bb018SKeith M Wesolowski 			fabricate_brandstr(cpi);
5826ab5bb018SKeith M Wesolowski 	}
5827ab5bb018SKeith M Wesolowski }
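
/*
 * Example of the cleanup above, with a hypothetical raw brand string:
 *
 *	"  Intel(R) Pentium(R) 4 CPU 3.06GHz  "
 *
 * becomes
 *
 *	"Intel(r) Pentium(r) 4 CPU 3.06GHz"
 *
 * i.e. leading and trailing spaces are stripped and (R) is mapped to (r);
 * a "Genuine " or "Authentic " prefix and (TM) would be handled likewise.
 */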
5828ab5bb018SKeith M Wesolowski 
582956726c7eSRobert Mustacchi typedef struct {
583056726c7eSRobert Mustacchi 	uint32_t avm_av;
583156726c7eSRobert Mustacchi 	uint32_t avm_feat;
583256726c7eSRobert Mustacchi } av_feat_map_t;
583356726c7eSRobert Mustacchi 
583456726c7eSRobert Mustacchi /*
583556726c7eSRobert Mustacchi  * These arrays are used to map features that we should add based on x86
583656726c7eSRobert Mustacchi  * features that are present. As a large number depend on kernel features,
583756726c7eSRobert Mustacchi  * rather than rechecking and clearing CPUID everywhere, we simply map these.
583856726c7eSRobert Mustacchi  * There is an array of these for each hwcap word. Some features aren't tracked
583956726c7eSRobert Mustacchi  * in the kernel x86 featureset and that's OK; they simply will not show up here.
584056726c7eSRobert Mustacchi  */
584156726c7eSRobert Mustacchi static const av_feat_map_t x86fset_to_av1[] = {
584256726c7eSRobert Mustacchi 	{ AV_386_CX8, X86FSET_CX8 },
584356726c7eSRobert Mustacchi 	{ AV_386_SEP, X86FSET_SEP },
584456726c7eSRobert Mustacchi 	{ AV_386_AMD_SYSC, X86FSET_ASYSC },
584556726c7eSRobert Mustacchi 	{ AV_386_CMOV, X86FSET_CMOV },
584656726c7eSRobert Mustacchi 	{ AV_386_FXSR, X86FSET_SSE },
584756726c7eSRobert Mustacchi 	{ AV_386_SSE, X86FSET_SSE },
584856726c7eSRobert Mustacchi 	{ AV_386_SSE2, X86FSET_SSE2 },
584956726c7eSRobert Mustacchi 	{ AV_386_SSE3, X86FSET_SSE3 },
585056726c7eSRobert Mustacchi 	{ AV_386_CX16, X86FSET_CX16 },
585156726c7eSRobert Mustacchi 	{ AV_386_TSCP, X86FSET_TSCP },
585256726c7eSRobert Mustacchi 	{ AV_386_AMD_SSE4A, X86FSET_SSE4A },
585356726c7eSRobert Mustacchi 	{ AV_386_SSSE3, X86FSET_SSSE3 },
585456726c7eSRobert Mustacchi 	{ AV_386_SSE4_1, X86FSET_SSE4_1 },
585556726c7eSRobert Mustacchi 	{ AV_386_SSE4_2, X86FSET_SSE4_2 },
585656726c7eSRobert Mustacchi 	{ AV_386_AES, X86FSET_AES },
585756726c7eSRobert Mustacchi 	{ AV_386_PCLMULQDQ, X86FSET_PCLMULQDQ },
585856726c7eSRobert Mustacchi 	{ AV_386_XSAVE, X86FSET_XSAVE },
585956726c7eSRobert Mustacchi 	{ AV_386_AVX, X86FSET_AVX },
586056726c7eSRobert Mustacchi 	{ AV_386_VMX, X86FSET_VMX },
586156726c7eSRobert Mustacchi 	{ AV_386_AMD_SVM, X86FSET_SVM }
586256726c7eSRobert Mustacchi };
586356726c7eSRobert Mustacchi 
586456726c7eSRobert Mustacchi static const av_feat_map_t x86fset_to_av2[] = {
586556726c7eSRobert Mustacchi 	{ AV_386_2_F16C, X86FSET_F16C },
586656726c7eSRobert Mustacchi 	{ AV_386_2_RDRAND, X86FSET_RDRAND },
586756726c7eSRobert Mustacchi 	{ AV_386_2_BMI1, X86FSET_BMI1 },
586856726c7eSRobert Mustacchi 	{ AV_386_2_BMI2, X86FSET_BMI2 },
586956726c7eSRobert Mustacchi 	{ AV_386_2_FMA, X86FSET_FMA },
587056726c7eSRobert Mustacchi 	{ AV_386_2_AVX2, X86FSET_AVX2 },
587156726c7eSRobert Mustacchi 	{ AV_386_2_ADX, X86FSET_ADX },
587256726c7eSRobert Mustacchi 	{ AV_386_2_RDSEED, X86FSET_RDSEED },
587356726c7eSRobert Mustacchi 	{ AV_386_2_AVX512F, X86FSET_AVX512F },
587456726c7eSRobert Mustacchi 	{ AV_386_2_AVX512DQ, X86FSET_AVX512DQ },
587556726c7eSRobert Mustacchi 	{ AV_386_2_AVX512IFMA, X86FSET_AVX512FMA },
587656726c7eSRobert Mustacchi 	{ AV_386_2_AVX512PF, X86FSET_AVX512PF },
587756726c7eSRobert Mustacchi 	{ AV_386_2_AVX512ER, X86FSET_AVX512ER },
587856726c7eSRobert Mustacchi 	{ AV_386_2_AVX512CD, X86FSET_AVX512CD },
587956726c7eSRobert Mustacchi 	{ AV_386_2_AVX512BW, X86FSET_AVX512BW },
588056726c7eSRobert Mustacchi 	{ AV_386_2_AVX512VL, X86FSET_AVX512VL },
588156726c7eSRobert Mustacchi 	{ AV_386_2_AVX512VBMI, X86FSET_AVX512VBMI },
588256726c7eSRobert Mustacchi 	{ AV_386_2_AVX512VPOPCDQ, X86FSET_AVX512VPOPCDQ },
588356726c7eSRobert Mustacchi 	{ AV_386_2_SHA, X86FSET_SHA },
588456726c7eSRobert Mustacchi 	{ AV_386_2_FSGSBASE, X86FSET_FSGSBASE },
588556726c7eSRobert Mustacchi 	{ AV_386_2_CLFLUSHOPT, X86FSET_CLFLUSHOPT },
588656726c7eSRobert Mustacchi 	{ AV_386_2_CLWB, X86FSET_CLWB },
588756726c7eSRobert Mustacchi 	{ AV_386_2_MONITORX, X86FSET_MONITORX },
588856726c7eSRobert Mustacchi 	{ AV_386_2_CLZERO, X86FSET_CLZERO },
588956726c7eSRobert Mustacchi 	{ AV_386_2_AVX512_VNNI, X86FSET_AVX512VNNI },
589056726c7eSRobert Mustacchi 	{ AV_386_2_VPCLMULQDQ, X86FSET_VPCLMULQDQ },
589156726c7eSRobert Mustacchi 	{ AV_386_2_VAES, X86FSET_VAES },
589256726c7eSRobert Mustacchi 	{ AV_386_2_GFNI, X86FSET_GFNI },
589356726c7eSRobert Mustacchi 	{ AV_386_2_AVX512_VP2INT, X86FSET_AVX512_VP2INT },
589456726c7eSRobert Mustacchi 	{ AV_386_2_AVX512_BITALG, X86FSET_AVX512_BITALG }
589556726c7eSRobert Mustacchi };
589656726c7eSRobert Mustacchi 
589756726c7eSRobert Mustacchi static const av_feat_map_t x86fset_to_av3[] = {
589856726c7eSRobert Mustacchi 	{ AV_386_3_AVX512_VBMI2, X86FSET_AVX512_VBMI2 },
589956726c7eSRobert Mustacchi 	{ AV_386_3_AVX512_BF16, X86FSET_AVX512_BF16 }
590056726c7eSRobert Mustacchi };
590156726c7eSRobert Mustacchi 
5902ab5bb018SKeith M Wesolowski /*
5903ab5bb018SKeith M Wesolowski  * This routine is called out of bind_hwcap() much later in the life
5904ab5bb018SKeith M Wesolowski  * of the kernel (post_startup()).  The job of this routine is to resolve
5905ab5bb018SKeith M Wesolowski  * the hardware feature support and kernel support for those features into
5906ab5bb018SKeith M Wesolowski  * what we're actually going to tell applications via the aux vector.
590756726c7eSRobert Mustacchi  *
590856726c7eSRobert Mustacchi  * Most of the aux vector is derived from the x86_featureset array, where a
590956726c7eSRobert Mustacchi  * given feature indicates that an aux vector flag should be plumbed through.
591056726c7eSRobert Mustacchi  * This allows the kernel to use one tracking mechanism for these, based on
591156726c7eSRobert Mustacchi  * whether or not it has the required hardware support (most often xsave).
591256726c7eSRobert Mustacchi  * Most newer features are added there in case we need them in the kernel.
591356726c7eSRobert Mustacchi  * Otherwise, features are evaluated by looking at the cpuid bits that remain.
591456726c7eSRobert Mustacchi  * If you find yourself wanting to clear out cpuid features for some reason,
591556726c7eSRobert Mustacchi  * they should instead be driven by the feature set so we have a consistent view.
5916ab5bb018SKeith M Wesolowski  */
5917ab5bb018SKeith M Wesolowski 
5918ab5bb018SKeith M Wesolowski static void
5919ab5bb018SKeith M Wesolowski cpuid_pass_resolve(cpu_t *cpu, void *arg)
5920ab5bb018SKeith M Wesolowski {
5921ab5bb018SKeith M Wesolowski 	uint_t *hwcap_out = (uint_t *)arg;
5922ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi;
592356726c7eSRobert Mustacchi 	uint_t hwcap_flags = 0, hwcap_flags_2 = 0, hwcap_flags_3 = 0;
5924ab5bb018SKeith M Wesolowski 
5925ab5bb018SKeith M Wesolowski 	cpi = cpu->cpu_m.mcpu_cpi;
5926ab5bb018SKeith M Wesolowski 
592756726c7eSRobert Mustacchi 	for (uint_t i = 0; i < ARRAY_SIZE(x86fset_to_av1); i++) {
592856726c7eSRobert Mustacchi 		if (is_x86_feature(x86_featureset,
592956726c7eSRobert Mustacchi 		    x86fset_to_av1[i].avm_feat)) {
593056726c7eSRobert Mustacchi 			hwcap_flags |= x86fset_to_av1[i].avm_av;
593156726c7eSRobert Mustacchi 		}
593256726c7eSRobert Mustacchi 	}
593356726c7eSRobert Mustacchi 
593456726c7eSRobert Mustacchi 	for (uint_t i = 0; i < ARRAY_SIZE(x86fset_to_av2); i++) {
593556726c7eSRobert Mustacchi 		if (is_x86_feature(x86_featureset,
593656726c7eSRobert Mustacchi 		    x86fset_to_av2[i].avm_feat)) {
593756726c7eSRobert Mustacchi 			hwcap_flags_2 |= x86fset_to_av2[i].avm_av;
593856726c7eSRobert Mustacchi 		}
593956726c7eSRobert Mustacchi 	}
594056726c7eSRobert Mustacchi 
594156726c7eSRobert Mustacchi 	for (uint_t i = 0; i < ARRAY_SIZE(x86fset_to_av3); i++) {
594256726c7eSRobert Mustacchi 		if (is_x86_feature(x86_featureset,
594356726c7eSRobert Mustacchi 		    x86fset_to_av3[i].avm_feat)) {
594456726c7eSRobert Mustacchi 			hwcap_flags_3 |= x86fset_to_av3[i].avm_av;
594556726c7eSRobert Mustacchi 		}
594656726c7eSRobert Mustacchi 	}
594756726c7eSRobert Mustacchi 
594856726c7eSRobert Mustacchi 	/*
594956726c7eSRobert Mustacchi 	 * From here on out we're working through features that don't have
595056726c7eSRobert Mustacchi 	 * corresponding kernel feature flags, mostly just due to the
595156726c7eSRobert Mustacchi 	 * historical implementation.
595256726c7eSRobert Mustacchi 	 */
5953ab5bb018SKeith M Wesolowski 	if (cpi->cpi_maxeax >= 1) {
5954ab5bb018SKeith M Wesolowski 		uint32_t *edx = &cpi->cpi_support[STD_EDX_FEATURES];
5955ab5bb018SKeith M Wesolowski 		uint32_t *ecx = &cpi->cpi_support[STD_ECX_FEATURES];
5956ab5bb018SKeith M Wesolowski 
5957ab5bb018SKeith M Wesolowski 		*edx = CPI_FEATURES_EDX(cpi);
5958ab5bb018SKeith M Wesolowski 		*ecx = CPI_FEATURES_ECX(cpi);
5959ab5bb018SKeith M Wesolowski 
5960ab5bb018SKeith M Wesolowski 		/*
5961ab5bb018SKeith M Wesolowski 		 * [no explicit support required beyond x87 fp context]
5962ab5bb018SKeith M Wesolowski 		 */
5963ab5bb018SKeith M Wesolowski 		if (!fpu_exists)
5964ab5bb018SKeith M Wesolowski 			*edx &= ~(CPUID_INTC_EDX_FPU | CPUID_INTC_EDX_MMX);
5965ab5bb018SKeith M Wesolowski 
5966ab5bb018SKeith M Wesolowski 		/*
5967ab5bb018SKeith M Wesolowski 		 * Now map the supported feature vector to things that we
5968ab5bb018SKeith M Wesolowski 		 * think userland will care about.
5969ab5bb018SKeith M Wesolowski 		 */
5970ab5bb018SKeith M Wesolowski 		if (*ecx & CPUID_INTC_ECX_MOVBE)
5971ab5bb018SKeith M Wesolowski 			hwcap_flags |= AV_386_MOVBE;
5972ab5bb018SKeith M Wesolowski 
5973ab5bb018SKeith M Wesolowski 		if (*ecx & CPUID_INTC_ECX_POPCNT)
5974ab5bb018SKeith M Wesolowski 			hwcap_flags |= AV_386_POPCNT;
5975ab5bb018SKeith M Wesolowski 		if (*edx & CPUID_INTC_EDX_FPU)
5976ab5bb018SKeith M Wesolowski 			hwcap_flags |= AV_386_FPU;
5977ab5bb018SKeith M Wesolowski 		if (*edx & CPUID_INTC_EDX_MMX)
5978ab5bb018SKeith M Wesolowski 			hwcap_flags |= AV_386_MMX;
5979ab5bb018SKeith M Wesolowski 		if (*edx & CPUID_INTC_EDX_TSC)
5980ab5bb018SKeith M Wesolowski 			hwcap_flags |= AV_386_TSC;
5981ab5bb018SKeith M Wesolowski 	}
5982ab5bb018SKeith M Wesolowski 
598356726c7eSRobert Mustacchi 	/*
598456726c7eSRobert Mustacchi 	 * Check a few miscellaneous features.
598556726c7eSRobert Mustacchi 	 */
5986ab5bb018SKeith M Wesolowski 	if (cpi->cpi_xmaxeax < 0x80000001)
5987ab5bb018SKeith M Wesolowski 		goto resolve_done;
5988ab5bb018SKeith M Wesolowski 
5989ab5bb018SKeith M Wesolowski 	switch (cpi->cpi_vendor) {
5990ab5bb018SKeith M Wesolowski 		uint32_t *edx, *ecx;
5991ab5bb018SKeith M Wesolowski 
5992ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Intel:
5993ab5bb018SKeith M Wesolowski 		/*
5994ab5bb018SKeith M Wesolowski 		 * Seems like Intel duplicated what was necessary
5995ab5bb018SKeith M Wesolowski 		 * here to make the initial crop of 64-bit OSes work.
5996ab5bb018SKeith M Wesolowski 		 * Hopefully, those are the only "extended" bits
5997ab5bb018SKeith M Wesolowski 		 * they'll add.
5998ab5bb018SKeith M Wesolowski 		 */
5999ab5bb018SKeith M Wesolowski 		/*FALLTHROUGH*/
6000ab5bb018SKeith M Wesolowski 
6001ab5bb018SKeith M Wesolowski 	case X86_VENDOR_AMD:
6002ab5bb018SKeith M Wesolowski 	case X86_VENDOR_HYGON:
6003ab5bb018SKeith M Wesolowski 		edx = &cpi->cpi_support[AMD_EDX_FEATURES];
6004ab5bb018SKeith M Wesolowski 		ecx = &cpi->cpi_support[AMD_ECX_FEATURES];
6005ab5bb018SKeith M Wesolowski 
6006ab5bb018SKeith M Wesolowski 		*edx = CPI_FEATURES_XTD_EDX(cpi);
6007ab5bb018SKeith M Wesolowski 		*ecx = CPI_FEATURES_XTD_ECX(cpi);
6008ab5bb018SKeith M Wesolowski 
6009ab5bb018SKeith M Wesolowski 		/*
6010ab5bb018SKeith M Wesolowski 		 * [no explicit support required beyond
6011ab5bb018SKeith M Wesolowski 		 * x87 fp context and exception handlers]
6012ab5bb018SKeith M Wesolowski 		 */
6013ab5bb018SKeith M Wesolowski 		if (!fpu_exists)
6014ab5bb018SKeith M Wesolowski 			*edx &= ~(CPUID_AMD_EDX_MMXamd |
6015ab5bb018SKeith M Wesolowski 			    CPUID_AMD_EDX_3DNow | CPUID_AMD_EDX_3DNowx);
6016ab5bb018SKeith M Wesolowski 
6017ab5bb018SKeith M Wesolowski 		/*
6018ab5bb018SKeith M Wesolowski 		 * Now map the supported feature vector to
6019ab5bb018SKeith M Wesolowski 		 * things that we think userland will care about.
6020ab5bb018SKeith M Wesolowski 		 */
6021ab5bb018SKeith M Wesolowski 		if (*edx & CPUID_AMD_EDX_MMXamd)
6022ab5bb018SKeith M Wesolowski 			hwcap_flags |= AV_386_AMD_MMX;
6023ab5bb018SKeith M Wesolowski 		if (*edx & CPUID_AMD_EDX_3DNow)
6024ab5bb018SKeith M Wesolowski 			hwcap_flags |= AV_386_AMD_3DNow;
6025ab5bb018SKeith M Wesolowski 		if (*edx & CPUID_AMD_EDX_3DNowx)
6026ab5bb018SKeith M Wesolowski 			hwcap_flags |= AV_386_AMD_3DNowx;
6027ab5bb018SKeith M Wesolowski 
6028ab5bb018SKeith M Wesolowski 		switch (cpi->cpi_vendor) {
6029ab5bb018SKeith M Wesolowski 		case X86_VENDOR_AMD:
6030ab5bb018SKeith M Wesolowski 		case X86_VENDOR_HYGON:
6031ab5bb018SKeith M Wesolowski 			if (*ecx & CPUID_AMD_ECX_AHF64)
6032ab5bb018SKeith M Wesolowski 				hwcap_flags |= AV_386_AHF;
6033ab5bb018SKeith M Wesolowski 			if (*ecx & CPUID_AMD_ECX_LZCNT)
6034ab5bb018SKeith M Wesolowski 				hwcap_flags |= AV_386_AMD_LZCNT;
6035ab5bb018SKeith M Wesolowski 			break;
6036ab5bb018SKeith M Wesolowski 
6037ab5bb018SKeith M Wesolowski 		case X86_VENDOR_Intel:
6038ab5bb018SKeith M Wesolowski 			if (*ecx & CPUID_AMD_ECX_LZCNT)
6039ab5bb018SKeith M Wesolowski 				hwcap_flags |= AV_386_AMD_LZCNT;
6040ab5bb018SKeith M Wesolowski 			/*
6041ab5bb018SKeith M Wesolowski 			 * Aarrgh.
6042ab5bb018SKeith M Wesolowski 			 * Intel uses a different bit in the same word.
6043ab5bb018SKeith M Wesolowski 			 */
6044ab5bb018SKeith M Wesolowski 			if (*ecx & CPUID_INTC_ECX_AHF64)
6045ab5bb018SKeith M Wesolowski 				hwcap_flags |= AV_386_AHF;
6046ab5bb018SKeith M Wesolowski 			break;
6047ab5bb018SKeith M Wesolowski 		default:
6048ab5bb018SKeith M Wesolowski 			break;
6049ab5bb018SKeith M Wesolowski 		}
6050ab5bb018SKeith M Wesolowski 		break;
6051ab5bb018SKeith M Wesolowski 
6052ab5bb018SKeith M Wesolowski 	default:
6053ab5bb018SKeith M Wesolowski 		break;
6054ab5bb018SKeith M Wesolowski 	}
6055ab5bb018SKeith M Wesolowski 
6056ab5bb018SKeith M Wesolowski resolve_done:
6057ab5bb018SKeith M Wesolowski 	if (hwcap_out != NULL) {
6058ab5bb018SKeith M Wesolowski 		hwcap_out[0] = hwcap_flags;
6059ab5bb018SKeith M Wesolowski 		hwcap_out[1] = hwcap_flags_2;
606056726c7eSRobert Mustacchi 		hwcap_out[2] = hwcap_flags_3;
6061ab5bb018SKeith M Wesolowski 	}
6062ab5bb018SKeith M Wesolowski }
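
/*
 * Editor's illustrative sketch, not part of the original file: the three
 * hwcap words assembled above are ultimately exported to userland through
 * the auxiliary vector, where a program can test them with getisax(2). The
 * #if 0 guard keeps this userland-side example out of the kernel build.
 */
#if 0
#include <sys/auxv.h>

int
can_use_lzcnt(void)
{
	uint32_t hw[3];

	(void) getisax(hw, 3);
	return ((hw[0] & AV_386_AMD_LZCNT) != 0);
}
#endif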
6063ab5bb018SKeith M Wesolowski 
6064ab5bb018SKeith M Wesolowski 
6065ab5bb018SKeith M Wesolowski /*
6066ab5bb018SKeith M Wesolowski  * Simulate the cpuid instruction using the data we previously
6067ab5bb018SKeith M Wesolowski  * captured about this CPU.  We try our best to return the truth
6068ab5bb018SKeith M Wesolowski  * about the hardware, independently of kernel support.
6069ab5bb018SKeith M Wesolowski  */
6070ab5bb018SKeith M Wesolowski uint32_t
6071ab5bb018SKeith M Wesolowski cpuid_insn(cpu_t *cpu, struct cpuid_regs *cp)
6072ab5bb018SKeith M Wesolowski {
6073ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi;
6074ab5bb018SKeith M Wesolowski 	struct cpuid_regs *xcp;
6075ab5bb018SKeith M Wesolowski 
6076ab5bb018SKeith M Wesolowski 	if (cpu == NULL)
6077ab5bb018SKeith M Wesolowski 		cpu = CPU;
6078ab5bb018SKeith M Wesolowski 	cpi = cpu->cpu_m.mcpu_cpi;
6079ab5bb018SKeith M Wesolowski 
6080ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_DYNAMIC));
6081ab5bb018SKeith M Wesolowski 
6082ab5bb018SKeith M Wesolowski 	/*
6083ab5bb018SKeith M Wesolowski 	 * CPUID data is cached in two separate places: cpi_std for standard
6084ab5bb018SKeith M Wesolowski 	 * CPUID leaves, and cpi_extd for extended CPUID leaves.
6085ab5bb018SKeith M Wesolowski 	 */
6086ab5bb018SKeith M Wesolowski 	if (cp->cp_eax <= cpi->cpi_maxeax && cp->cp_eax < NMAX_CPI_STD) {
6087ab5bb018SKeith M Wesolowski 		xcp = &cpi->cpi_std[cp->cp_eax];
6088ab5bb018SKeith M Wesolowski 	} else if (cp->cp_eax >= CPUID_LEAF_EXT_0 &&
6089ab5bb018SKeith M Wesolowski 	    cp->cp_eax <= cpi->cpi_xmaxeax &&
6090ab5bb018SKeith M Wesolowski 	    cp->cp_eax < CPUID_LEAF_EXT_0 + NMAX_CPI_EXTD) {
6091ab5bb018SKeith M Wesolowski 		xcp = &cpi->cpi_extd[cp->cp_eax - CPUID_LEAF_EXT_0];
6092ab5bb018SKeith M Wesolowski 	} else {
6093ab5bb018SKeith M Wesolowski 		/*
6094ab5bb018SKeith M Wesolowski 		 * The caller is asking for data from an input parameter which
6095ab5bb018SKeith M Wesolowski 		 * the kernel has not cached.  In this case we go fetch from
6096ab5bb018SKeith M Wesolowski 		 * the hardware and return the data directly to the user.
6097ab5bb018SKeith M Wesolowski 		 */
6098ab5bb018SKeith M Wesolowski 		return (__cpuid_insn(cp));
6099ab5bb018SKeith M Wesolowski 	}
6100ab5bb018SKeith M Wesolowski 
6101ab5bb018SKeith M Wesolowski 	cp->cp_eax = xcp->cp_eax;
6102ab5bb018SKeith M Wesolowski 	cp->cp_ebx = xcp->cp_ebx;
6103ab5bb018SKeith M Wesolowski 	cp->cp_ecx = xcp->cp_ecx;
6104ab5bb018SKeith M Wesolowski 	cp->cp_edx = xcp->cp_edx;
6105ab5bb018SKeith M Wesolowski 	return (cp->cp_eax);
6106ab5bb018SKeith M Wesolowski }
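
/*
 * Editor's illustrative sketch (hypothetical, not original code): reading
 * one of the brand-string leaves through the simulation above. A NULL cpu
 * argument means "the current CPU"; leaves the kernel has cached are
 * answered from cpi_std/cpi_extd, and anything else goes straight to the
 * hardware via __cpuid_insn().
 */
#if 0
static void
example_read_brand_leaf(void)
{
	struct cpuid_regs cp = { 0 };

	cp.cp_eax = 0x80000002;		/* first brand-string leaf */
	(void) cpuid_insn(NULL, &cp);
	/* cp.cp_eax through cp.cp_edx now hold 16 bytes of the string */
}
#endif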
6107ab5bb018SKeith M Wesolowski 
6108ab5bb018SKeith M Wesolowski boolean_t
6109ab5bb018SKeith M Wesolowski cpuid_checkpass(const cpu_t *const cpu, const cpuid_pass_t pass)
6110ab5bb018SKeith M Wesolowski {
6111ab5bb018SKeith M Wesolowski 	return (cpu != NULL && cpu->cpu_m.mcpu_cpi != NULL &&
6112ab5bb018SKeith M Wesolowski 	    cpu->cpu_m.mcpu_cpi->cpi_pass >= pass);
6113ab5bb018SKeith M Wesolowski }
6114ab5bb018SKeith M Wesolowski 
6115ab5bb018SKeith M Wesolowski int
6116ab5bb018SKeith M Wesolowski cpuid_getbrandstr(cpu_t *cpu, char *s, size_t n)
6117ab5bb018SKeith M Wesolowski {
6118ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_DYNAMIC));
6119ab5bb018SKeith M Wesolowski 
6120ab5bb018SKeith M Wesolowski 	return (snprintf(s, n, "%s", cpu->cpu_m.mcpu_cpi->cpi_brandstr));
6121ab5bb018SKeith M Wesolowski }
6122ab5bb018SKeith M Wesolowski 
6123ab5bb018SKeith M Wesolowski int
6124ab5bb018SKeith M Wesolowski cpuid_is_cmt(cpu_t *cpu)
6125ab5bb018SKeith M Wesolowski {
6126ab5bb018SKeith M Wesolowski 	if (cpu == NULL)
6127ab5bb018SKeith M Wesolowski 		cpu = CPU;
6128ab5bb018SKeith M Wesolowski 
6129ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6130ab5bb018SKeith M Wesolowski 
6131ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_chipid >= 0);
6132ab5bb018SKeith M Wesolowski }
6133ab5bb018SKeith M Wesolowski 
6134ab5bb018SKeith M Wesolowski /*
6135ab5bb018SKeith M Wesolowski  * AMD and Intel both implement the 64-bit variant of the syscall
6136ab5bb018SKeith M Wesolowski  * instruction (syscallq), so if there's -any- support for syscall,
6137ab5bb018SKeith M Wesolowski  * cpuid currently says "yes, we support this".
6138ab5bb018SKeith M Wesolowski  *
6139ab5bb018SKeith M Wesolowski  * However, Intel decided to -not- implement the 32-bit variant of the
6140ab5bb018SKeith M Wesolowski  * syscall instruction, so we provide a predicate to allow our caller
6141ab5bb018SKeith M Wesolowski  * to test that subtlety here.
6142ab5bb018SKeith M Wesolowski  *
6143ab5bb018SKeith M Wesolowski  * XXPV	Currently, 32-bit syscall instructions don't work via the hypervisor,
6144ab5bb018SKeith M Wesolowski  *	even in the case where the hardware would in fact support it.
6145ab5bb018SKeith M Wesolowski  */
6146ab5bb018SKeith M Wesolowski /*ARGSUSED*/
6147ab5bb018SKeith M Wesolowski int
6148ab5bb018SKeith M Wesolowski cpuid_syscall32_insn(cpu_t *cpu)
6149ab5bb018SKeith M Wesolowski {
6150ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass((cpu == NULL ? CPU : cpu), CPUID_PASS_BASIC));
6151ab5bb018SKeith M Wesolowski 
6152ab5bb018SKeith M Wesolowski #if !defined(__xpv)
6153ab5bb018SKeith M Wesolowski 	if (cpu == NULL)
6154ab5bb018SKeith M Wesolowski 		cpu = CPU;
6155ab5bb018SKeith M Wesolowski 
6156ab5bb018SKeith M Wesolowski 	/*CSTYLED*/
6157ab5bb018SKeith M Wesolowski 	{
6158ab5bb018SKeith M Wesolowski 		struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
6159ab5bb018SKeith M Wesolowski 
6160ab5bb018SKeith M Wesolowski 		if ((cpi->cpi_vendor == X86_VENDOR_AMD ||
6161ab5bb018SKeith M Wesolowski 		    cpi->cpi_vendor == X86_VENDOR_HYGON) &&
6162ab5bb018SKeith M Wesolowski 		    cpi->cpi_xmaxeax >= 0x80000001 &&
6163ab5bb018SKeith M Wesolowski 		    (CPI_FEATURES_XTD_EDX(cpi) & CPUID_AMD_EDX_SYSC))
6164ab5bb018SKeith M Wesolowski 			return (1);
6165ab5bb018SKeith M Wesolowski 	}
6166ab5bb018SKeith M Wesolowski #endif
6167ab5bb018SKeith M Wesolowski 	return (0);
6168ab5bb018SKeith M Wesolowski }
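
/*
 * Editor's illustrative sketch (hypothetical, not original code): a caller
 * gating 32-bit fast-syscall setup on the predicate above; a NULL argument
 * again means "the current CPU".
 */
#if 0
static int
example_use_syscall32(void)
{
	if (cpuid_syscall32_insn(NULL)) {
		return (1);	/* syscall/sysret usable by 32-bit processes */
	}
	return (0);		/* fall back to sysenter- or lcall-based entry */
}
#endif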
6169ab5bb018SKeith M Wesolowski 
6170ab5bb018SKeith M Wesolowski int
6171ab5bb018SKeith M Wesolowski cpuid_getidstr(cpu_t *cpu, char *s, size_t n)
6172ab5bb018SKeith M Wesolowski {
6173ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
6174ab5bb018SKeith M Wesolowski 
6175ab5bb018SKeith M Wesolowski 	static const char fmt[] =
6176ab5bb018SKeith M Wesolowski 	    "x86 (%s %X family %d model %d step %d clock %d MHz)";
6177ab5bb018SKeith M Wesolowski 	static const char fmt_ht[] =
6178ab5bb018SKeith M Wesolowski 	    "x86 (chipid 0x%x %s %X family %d model %d step %d clock %d MHz)";
6179ab5bb018SKeith M Wesolowski 
6180ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6181ab5bb018SKeith M Wesolowski 
6182ab5bb018SKeith M Wesolowski 	if (cpuid_is_cmt(cpu))
6183ab5bb018SKeith M Wesolowski 		return (snprintf(s, n, fmt_ht, cpi->cpi_chipid,
6184ab5bb018SKeith M Wesolowski 		    cpi->cpi_vendorstr, cpi->cpi_std[1].cp_eax,
6185ab5bb018SKeith M Wesolowski 		    cpi->cpi_family, cpi->cpi_model,
6186ab5bb018SKeith M Wesolowski 		    cpi->cpi_step, cpu->cpu_type_info.pi_clock));
6187ab5bb018SKeith M Wesolowski 	return (snprintf(s, n, fmt,
6188ab5bb018SKeith M Wesolowski 	    cpi->cpi_vendorstr, cpi->cpi_std[1].cp_eax,
6189ab5bb018SKeith M Wesolowski 	    cpi->cpi_family, cpi->cpi_model,
6190ab5bb018SKeith M Wesolowski 	    cpi->cpi_step, cpu->cpu_type_info.pi_clock));
6191ab5bb018SKeith M Wesolowski }
6192ab5bb018SKeith M Wesolowski 
6193ab5bb018SKeith M Wesolowski const char *
6194ab5bb018SKeith M Wesolowski cpuid_getvendorstr(cpu_t *cpu)
6195ab5bb018SKeith M Wesolowski {
6196ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_IDENT));
6197ab5bb018SKeith M Wesolowski 	return ((const char *)cpu->cpu_m.mcpu_cpi->cpi_vendorstr);
6198ab5bb018SKeith M Wesolowski }
6199ab5bb018SKeith M Wesolowski 
6200ab5bb018SKeith M Wesolowski uint_t
6201ab5bb018SKeith M Wesolowski cpuid_getvendor(cpu_t *cpu)
6202ab5bb018SKeith M Wesolowski {
6203ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_IDENT));
6204ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_vendor);
6205ab5bb018SKeith M Wesolowski }
6206ab5bb018SKeith M Wesolowski 
6207ab5bb018SKeith M Wesolowski uint_t
6208ab5bb018SKeith M Wesolowski cpuid_getfamily(cpu_t *cpu)
6209ab5bb018SKeith M Wesolowski {
6210ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_IDENT));
6211ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_family);
6212ab5bb018SKeith M Wesolowski }
6213ab5bb018SKeith M Wesolowski 
6214ab5bb018SKeith M Wesolowski uint_t
6215ab5bb018SKeith M Wesolowski cpuid_getmodel(cpu_t *cpu)
6216ab5bb018SKeith M Wesolowski {
6217ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_IDENT));
6218ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_model);
6219ab5bb018SKeith M Wesolowski }
6220ab5bb018SKeith M Wesolowski 
6221ab5bb018SKeith M Wesolowski uint_t
6222ab5bb018SKeith M Wesolowski cpuid_get_ncpu_per_chip(cpu_t *cpu)
6223ab5bb018SKeith M Wesolowski {
6224ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6225ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_ncpu_per_chip);
6226ab5bb018SKeith M Wesolowski }
6227ab5bb018SKeith M Wesolowski 
6228ab5bb018SKeith M Wesolowski uint_t
6229ab5bb018SKeith M Wesolowski cpuid_get_ncore_per_chip(cpu_t *cpu)
6230ab5bb018SKeith M Wesolowski {
6231ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6232ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_ncore_per_chip);
6233ab5bb018SKeith M Wesolowski }
6234ab5bb018SKeith M Wesolowski 
6235ab5bb018SKeith M Wesolowski uint_t
6236ab5bb018SKeith M Wesolowski cpuid_get_ncpu_sharing_last_cache(cpu_t *cpu)
6237ab5bb018SKeith M Wesolowski {
6238ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_EXTENDED));
6239ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_ncpu_shr_last_cache);
6240ab5bb018SKeith M Wesolowski }
6241ab5bb018SKeith M Wesolowski 
6242ab5bb018SKeith M Wesolowski id_t
6243ab5bb018SKeith M Wesolowski cpuid_get_last_lvl_cacheid(cpu_t *cpu)
6244ab5bb018SKeith M Wesolowski {
6245ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_EXTENDED));
6246ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_last_lvl_cacheid);
6247ab5bb018SKeith M Wesolowski }
6248ab5bb018SKeith M Wesolowski 
6249ab5bb018SKeith M Wesolowski uint_t
6250ab5bb018SKeith M Wesolowski cpuid_getstep(cpu_t *cpu)
6251ab5bb018SKeith M Wesolowski {
6252ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_IDENT));
6253ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_step);
6254ab5bb018SKeith M Wesolowski }
6255ab5bb018SKeith M Wesolowski 
6256ab5bb018SKeith M Wesolowski uint_t
6257ab5bb018SKeith M Wesolowski cpuid_getsig(struct cpu *cpu)
6258ab5bb018SKeith M Wesolowski {
6259df5a0c57SLuqman Aden 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_IDENT));
6260ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_std[1].cp_eax);
6261ab5bb018SKeith M Wesolowski }
6262ab5bb018SKeith M Wesolowski 
62639c2f2158SToomas Soome x86_chiprev_t
6264ab5bb018SKeith M Wesolowski cpuid_getchiprev(struct cpu *cpu)
6265ab5bb018SKeith M Wesolowski {
6266ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_IDENT));
6267ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_chiprev);
6268ab5bb018SKeith M Wesolowski }
6269ab5bb018SKeith M Wesolowski 
6270ab5bb018SKeith M Wesolowski const char *
6271ab5bb018SKeith M Wesolowski cpuid_getchiprevstr(struct cpu *cpu)
6272ab5bb018SKeith M Wesolowski {
6273ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_IDENT));
6274ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_chiprevstr);
6275ab5bb018SKeith M Wesolowski }
6276ab5bb018SKeith M Wesolowski 
6277ab5bb018SKeith M Wesolowski uint32_t
6278ab5bb018SKeith M Wesolowski cpuid_getsockettype(struct cpu *cpu)
6279ab5bb018SKeith M Wesolowski {
6280ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_IDENT));
6281ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_socket);
6282ab5bb018SKeith M Wesolowski }
6283ab5bb018SKeith M Wesolowski 
6284ab5bb018SKeith M Wesolowski const char *
6285ab5bb018SKeith M Wesolowski cpuid_getsocketstr(cpu_t *cpu)
6286ab5bb018SKeith M Wesolowski {
6287ab5bb018SKeith M Wesolowski 	static const char *socketstr = NULL;
6288ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi;
6289ab5bb018SKeith M Wesolowski 
6290ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_IDENT));
6291ab5bb018SKeith M Wesolowski 	cpi = cpu->cpu_m.mcpu_cpi;
6292ab5bb018SKeith M Wesolowski 
6293ab5bb018SKeith M Wesolowski 	/* Assume that socket types are the same across the system */
6294ab5bb018SKeith M Wesolowski 	if (socketstr == NULL)
6295ab5bb018SKeith M Wesolowski 		socketstr = _cpuid_sktstr(cpi->cpi_vendor, cpi->cpi_family,
6296ab5bb018SKeith M Wesolowski 		    cpi->cpi_model, cpi->cpi_step);
6297ab5bb018SKeith M Wesolowski 
6298ab5bb018SKeith M Wesolowski 
6299ab5bb018SKeith M Wesolowski 	return (socketstr);
6300ab5bb018SKeith M Wesolowski }
6301ab5bb018SKeith M Wesolowski 
630222e4c3acSKeith M Wesolowski x86_uarchrev_t
630322e4c3acSKeith M Wesolowski cpuid_getuarchrev(cpu_t *cpu)
630422e4c3acSKeith M Wesolowski {
630522e4c3acSKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_uarchrev);
630622e4c3acSKeith M Wesolowski }
630722e4c3acSKeith M Wesolowski 
6308ab5bb018SKeith M Wesolowski int
6309ab5bb018SKeith M Wesolowski cpuid_get_chipid(cpu_t *cpu)
6310ab5bb018SKeith M Wesolowski {
6311ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6312ab5bb018SKeith M Wesolowski 
6313ab5bb018SKeith M Wesolowski 	if (cpuid_is_cmt(cpu))
6314ab5bb018SKeith M Wesolowski 		return (cpu->cpu_m.mcpu_cpi->cpi_chipid);
6315ab5bb018SKeith M Wesolowski 	return (cpu->cpu_id);
6316ab5bb018SKeith M Wesolowski }
6317ab5bb018SKeith M Wesolowski 
6318ab5bb018SKeith M Wesolowski id_t
6319ab5bb018SKeith M Wesolowski cpuid_get_coreid(cpu_t *cpu)
6320ab5bb018SKeith M Wesolowski {
6321ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6322ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_coreid);
6323ab5bb018SKeith M Wesolowski }
6324ab5bb018SKeith M Wesolowski 
6325ab5bb018SKeith M Wesolowski int
6326ab5bb018SKeith M Wesolowski cpuid_get_pkgcoreid(cpu_t *cpu)
6327ab5bb018SKeith M Wesolowski {
6328ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6329ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_pkgcoreid);
6330ab5bb018SKeith M Wesolowski }
6331ab5bb018SKeith M Wesolowski 
6332ab5bb018SKeith M Wesolowski int
6333ab5bb018SKeith M Wesolowski cpuid_get_clogid(cpu_t *cpu)
6334ab5bb018SKeith M Wesolowski {
6335ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6336ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_clogid);
6337ab5bb018SKeith M Wesolowski }
6338ab5bb018SKeith M Wesolowski 
6339ab5bb018SKeith M Wesolowski int
6340ab5bb018SKeith M Wesolowski cpuid_get_cacheid(cpu_t *cpu)
6341ab5bb018SKeith M Wesolowski {
6342ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6343ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_last_lvl_cacheid);
6344ab5bb018SKeith M Wesolowski }
6345ab5bb018SKeith M Wesolowski 
6346ab5bb018SKeith M Wesolowski uint_t
6347ab5bb018SKeith M Wesolowski cpuid_get_procnodeid(cpu_t *cpu)
6348ab5bb018SKeith M Wesolowski {
6349ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6350ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_procnodeid);
6351ab5bb018SKeith M Wesolowski }
6352ab5bb018SKeith M Wesolowski 
6353ab5bb018SKeith M Wesolowski uint_t
6354ab5bb018SKeith M Wesolowski cpuid_get_procnodes_per_pkg(cpu_t *cpu)
6355ab5bb018SKeith M Wesolowski {
6356ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6357ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_procnodes_per_pkg);
6358ab5bb018SKeith M Wesolowski }
6359ab5bb018SKeith M Wesolowski 
6360ab5bb018SKeith M Wesolowski uint_t
6361ab5bb018SKeith M Wesolowski cpuid_get_compunitid(cpu_t *cpu)
6362ab5bb018SKeith M Wesolowski {
6363ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6364ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_compunitid);
6365ab5bb018SKeith M Wesolowski }
6366ab5bb018SKeith M Wesolowski 
6367ab5bb018SKeith M Wesolowski uint_t
6368ab5bb018SKeith M Wesolowski cpuid_get_cores_per_compunit(cpu_t *cpu)
6369ab5bb018SKeith M Wesolowski {
6370ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6371ab5bb018SKeith M Wesolowski 	return (cpu->cpu_m.mcpu_cpi->cpi_cores_per_compunit);
6372ab5bb018SKeith M Wesolowski }
6373ab5bb018SKeith M Wesolowski 
6374ab5bb018SKeith M Wesolowski uint32_t
6375ab5bb018SKeith M Wesolowski cpuid_get_apicid(cpu_t *cpu)
6376ab5bb018SKeith M Wesolowski {
6377ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6378ab5bb018SKeith M Wesolowski 	if (cpu->cpu_m.mcpu_cpi->cpi_maxeax < 1) {
6379ab5bb018SKeith M Wesolowski 		return (UINT32_MAX);
6380ab5bb018SKeith M Wesolowski 	} else {
6381ab5bb018SKeith M Wesolowski 		return (cpu->cpu_m.mcpu_cpi->cpi_apicid);
6382ab5bb018SKeith M Wesolowski 	}
6383ab5bb018SKeith M Wesolowski }
6384ab5bb018SKeith M Wesolowski 
6385ab5bb018SKeith M Wesolowski void
6386ab5bb018SKeith M Wesolowski cpuid_get_addrsize(cpu_t *cpu, uint_t *pabits, uint_t *vabits)
6387ab5bb018SKeith M Wesolowski {
6388ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi;
6389ab5bb018SKeith M Wesolowski 
6390ab5bb018SKeith M Wesolowski 	if (cpu == NULL)
6391ab5bb018SKeith M Wesolowski 		cpu = CPU;
6392ab5bb018SKeith M Wesolowski 	cpi = cpu->cpu_m.mcpu_cpi;
6393ab5bb018SKeith M Wesolowski 
6394ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6395ab5bb018SKeith M Wesolowski 
6396ab5bb018SKeith M Wesolowski 	if (pabits)
6397ab5bb018SKeith M Wesolowski 		*pabits = cpi->cpi_pabits;
6398ab5bb018SKeith M Wesolowski 	if (vabits)
6399ab5bb018SKeith M Wesolowski 		*vabits = cpi->cpi_vabits;
6400ab5bb018SKeith M Wesolowski }
6401ab5bb018SKeith M Wesolowski 
6402ab5bb018SKeith M Wesolowski size_t
6403ed093b41SRobert Mustacchi cpuid_get_xsave_size(void)
6404ab5bb018SKeith M Wesolowski {
6405ab5bb018SKeith M Wesolowski 	return (MAX(cpuid_info0.cpi_xsave.xsav_max_size,
6406ab5bb018SKeith M Wesolowski 	    sizeof (struct xsave_state)));
6407ab5bb018SKeith M Wesolowski }
6408ab5bb018SKeith M Wesolowski 
6409ab5bb018SKeith M Wesolowski /*
6410ed093b41SRobert Mustacchi  * Export information about known offsets to the kernel. We only care about
6411ed093b41SRobert Mustacchi  * things we have actually enabled support for in %xcr0.
6412ed093b41SRobert Mustacchi  */
6413ed093b41SRobert Mustacchi void
6414ed093b41SRobert Mustacchi cpuid_get_xsave_info(uint64_t bit, size_t *sizep, size_t *offp)
6415ed093b41SRobert Mustacchi {
6416ed093b41SRobert Mustacchi 	size_t size, off;
6417ed093b41SRobert Mustacchi 
6418ed093b41SRobert Mustacchi 	VERIFY3U(bit & xsave_bv_all, !=, 0);
6419ed093b41SRobert Mustacchi 
6420ed093b41SRobert Mustacchi 	if (sizep == NULL)
6421ed093b41SRobert Mustacchi 		sizep = &size;
6422ed093b41SRobert Mustacchi 	if (offp == NULL)
6423ed093b41SRobert Mustacchi 		offp = &off;
6424ed093b41SRobert Mustacchi 
6425ed093b41SRobert Mustacchi 	switch (bit) {
6426ed093b41SRobert Mustacchi 	case XFEATURE_LEGACY_FP:
6427ed093b41SRobert Mustacchi 	case XFEATURE_SSE:
6428ed093b41SRobert Mustacchi 		*sizep = sizeof (struct fxsave_state);
6429ed093b41SRobert Mustacchi 		*offp = 0;
6430ed093b41SRobert Mustacchi 		break;
6431ed093b41SRobert Mustacchi 	case XFEATURE_AVX:
6432ed093b41SRobert Mustacchi 		*sizep = cpuid_info0.cpi_xsave.ymm_size;
6433ed093b41SRobert Mustacchi 		*offp = cpuid_info0.cpi_xsave.ymm_offset;
6434ed093b41SRobert Mustacchi 		break;
6435ed093b41SRobert Mustacchi 	case XFEATURE_AVX512_OPMASK:
6436ed093b41SRobert Mustacchi 		*sizep = cpuid_info0.cpi_xsave.opmask_size;
6437ed093b41SRobert Mustacchi 		*offp = cpuid_info0.cpi_xsave.opmask_offset;
6438ed093b41SRobert Mustacchi 		break;
6439ed093b41SRobert Mustacchi 	case XFEATURE_AVX512_ZMM:
6440ed093b41SRobert Mustacchi 		*sizep = cpuid_info0.cpi_xsave.zmmlo_size;
6441ed093b41SRobert Mustacchi 		*offp = cpuid_info0.cpi_xsave.zmmlo_offset;
6442ed093b41SRobert Mustacchi 		break;
6443ed093b41SRobert Mustacchi 	case XFEATURE_AVX512_HI_ZMM:
6444ed093b41SRobert Mustacchi 		*sizep = cpuid_info0.cpi_xsave.zmmhi_size;
6445ed093b41SRobert Mustacchi 		*offp = cpuid_info0.cpi_xsave.zmmhi_offset;
6446ed093b41SRobert Mustacchi 		break;
6447ed093b41SRobert Mustacchi 	default:
6448ed093b41SRobert Mustacchi 		panic("asked for unsupported xsave feature: 0x%lx", bit);
6449ed093b41SRobert Mustacchi 	}
6450ed093b41SRobert Mustacchi }
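
/*
 * Editor's illustrative sketch (hypothetical, not original code): using the
 * lookup above to locate the AVX (ymm) component inside an xsave area; the
 * 'xs' pointer is an example variable, not something defined in this file.
 */
#if 0
static void *
example_avx_area(struct xsave_state *xs)
{
	size_t size, off;

	cpuid_get_xsave_info(XFEATURE_AVX, &size, &off);
	return ((caddr_t)xs + off);	/* 'size' bytes of ymm state */
}
#endif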
6451ed093b41SRobert Mustacchi 
6452ed093b41SRobert Mustacchi /*
6453ab5bb018SKeith M Wesolowski  * Return true if the CPUs on this system require 'pointer clearing' for the
6454ab5bb018SKeith M Wesolowski  * floating point error pointer exception handling. In the past, this has been
6455ab5bb018SKeith M Wesolowski  * true for all AMD K7 & K8 CPUs, although newer AMD CPUs have been changed to
6456ab5bb018SKeith M Wesolowski  * behave the same as Intel. This is checked via the CPUID_AMD_EBX_ERR_PTR_ZERO
6457ab5bb018SKeith M Wesolowski  * feature bit and is reflected in the cpi_fp_amd_save member.
6458ab5bb018SKeith M Wesolowski  */
6459ab5bb018SKeith M Wesolowski boolean_t
6460ed093b41SRobert Mustacchi cpuid_need_fp_excp_handling(void)
6461ab5bb018SKeith M Wesolowski {
6462ab5bb018SKeith M Wesolowski 	return (cpuid_info0.cpi_vendor == X86_VENDOR_AMD &&
6463ab5bb018SKeith M Wesolowski 	    cpuid_info0.cpi_fp_amd_save != 0);
6464ab5bb018SKeith M Wesolowski }
6465ab5bb018SKeith M Wesolowski 
6466ab5bb018SKeith M Wesolowski /*
6467ab5bb018SKeith M Wesolowski  * Returns the number of data TLB entries for a corresponding
6468ab5bb018SKeith M Wesolowski  * pagesize.  If it can't be computed, or isn't known, the
6469ab5bb018SKeith M Wesolowski  * routine returns zero.  If you ask about an architecturally
6470ab5bb018SKeith M Wesolowski  * impossible pagesize, the routine will panic (so that the
6471ab5bb018SKeith M Wesolowski  * hat implementor knows that things are inconsistent.)
6472ab5bb018SKeith M Wesolowski  */
6473ab5bb018SKeith M Wesolowski uint_t
6474ab5bb018SKeith M Wesolowski cpuid_get_dtlb_nent(cpu_t *cpu, size_t pagesize)
6475ab5bb018SKeith M Wesolowski {
6476ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi;
6477ab5bb018SKeith M Wesolowski 	uint_t dtlb_nent = 0;
6478ab5bb018SKeith M Wesolowski 
6479ab5bb018SKeith M Wesolowski 	if (cpu == NULL)
6480ab5bb018SKeith M Wesolowski 		cpu = CPU;
6481ab5bb018SKeith M Wesolowski 	cpi = cpu->cpu_m.mcpu_cpi;
6482ab5bb018SKeith M Wesolowski 
6483ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
6484ab5bb018SKeith M Wesolowski 
6485ab5bb018SKeith M Wesolowski 	/*
6486ab5bb018SKeith M Wesolowski 	 * Check the L2 TLB info
6487ab5bb018SKeith M Wesolowski 	 */
6488ab5bb018SKeith M Wesolowski 	if (cpi->cpi_xmaxeax >= 0x80000006) {
6489ab5bb018SKeith M Wesolowski 		struct cpuid_regs *cp = &cpi->cpi_extd[6];
6490ab5bb018SKeith M Wesolowski 
6491ab5bb018SKeith M Wesolowski 		switch (pagesize) {
6492ab5bb018SKeith M Wesolowski 
6493ab5bb018SKeith M Wesolowski 		case 4 * 1024:
6494ab5bb018SKeith M Wesolowski 			/*
6495ab5bb018SKeith M Wesolowski 			 * All zero in the top 16 bits of the register
6496ab5bb018SKeith M Wesolowski 			 * indicates a unified TLB. Size is in low 16 bits.
6497ab5bb018SKeith M Wesolowski 			 */
6498ab5bb018SKeith M Wesolowski 			if ((cp->cp_ebx & 0xffff0000) == 0)
6499ab5bb018SKeith M Wesolowski 				dtlb_nent = cp->cp_ebx & 0x0000ffff;
6500ab5bb018SKeith M Wesolowski 			else
6501ab5bb018SKeith M Wesolowski 				dtlb_nent = BITX(cp->cp_ebx, 27, 16);
6502ab5bb018SKeith M Wesolowski 			break;
6503ab5bb018SKeith M Wesolowski 
6504ab5bb018SKeith M Wesolowski 		case 2 * 1024 * 1024:
6505ab5bb018SKeith M Wesolowski 			if ((cp->cp_eax & 0xffff0000) == 0)
6506ab5bb018SKeith M Wesolowski 				dtlb_nent = cp->cp_eax & 0x0000ffff;
6507ab5bb018SKeith M Wesolowski 			else
6508ab5bb018SKeith M Wesolowski 				dtlb_nent = BITX(cp->cp_eax, 27, 16);
6509ab5bb018SKeith M Wesolowski 			break;
6510ab5bb018SKeith M Wesolowski 
6511ab5bb018SKeith M Wesolowski 		default:
6512ab5bb018SKeith M Wesolowski 			panic("unknown L2 pagesize");
6513ab5bb018SKeith M Wesolowski 			/*NOTREACHED*/
6514ab5bb018SKeith M Wesolowski 		}
6515ab5bb018SKeith M Wesolowski 	}
6516ab5bb018SKeith M Wesolowski 
6517ab5bb018SKeith M Wesolowski 	if (dtlb_nent != 0)
6518ab5bb018SKeith M Wesolowski 		return (dtlb_nent);
6519ab5bb018SKeith M Wesolowski 
6520ab5bb018SKeith M Wesolowski 	/*
6521ab5bb018SKeith M Wesolowski 	 * No L2 TLB support for this size, try L1.
6522ab5bb018SKeith M Wesolowski 	 */
6523ab5bb018SKeith M Wesolowski 	if (cpi->cpi_xmaxeax >= 0x80000005) {
6524ab5bb018SKeith M Wesolowski 		struct cpuid_regs *cp = &cpi->cpi_extd[5];
6525ab5bb018SKeith M Wesolowski 
6526ab5bb018SKeith M Wesolowski 		switch (pagesize) {
6527ab5bb018SKeith M Wesolowski 		case 4 * 1024:
6528ab5bb018SKeith M Wesolowski 			dtlb_nent = BITX(cp->cp_ebx, 23, 16);
6529ab5bb018SKeith M Wesolowski 			break;
6530ab5bb018SKeith M Wesolowski 		case 2 * 1024 * 1024:
6531ab5bb018SKeith M Wesolowski 			dtlb_nent = BITX(cp->cp_eax, 23, 16);
6532ab5bb018SKeith M Wesolowski 			break;
6533ab5bb018SKeith M Wesolowski 		default:
6534ab5bb018SKeith M Wesolowski 			panic("unknown L1 d-TLB pagesize");
6535ab5bb018SKeith M Wesolowski 			/*NOTREACHED*/
6536ab5bb018SKeith M Wesolowski 		}
6537ab5bb018SKeith M Wesolowski 	}
6538ab5bb018SKeith M Wesolowski 
6539ab5bb018SKeith M Wesolowski 	return (dtlb_nent);
6540ab5bb018SKeith M Wesolowski }
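
/*
 * Editor's illustrative sketch (hypothetical, not original code): a
 * hat-layer caller sizing a batch of 2M mappings against the data TLB,
 * treating a return of zero as "unknown" and falling back to a small
 * default.
 */
#if 0
static uint_t
example_large_page_batch(void)
{
	uint_t nent = cpuid_get_dtlb_nent(CPU, 2 * 1024 * 1024);

	return (nent != 0 ? nent : 8);
}
#endif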
6541ab5bb018SKeith M Wesolowski 
6542ab5bb018SKeith M Wesolowski /*
6543ab5bb018SKeith M Wesolowski  * Return 0 if the erratum is not present or not applicable, positive
6544ab5bb018SKeith M Wesolowski  * if it is, and negative if the status of the erratum is unknown.
6545ab5bb018SKeith M Wesolowski  *
6546ab5bb018SKeith M Wesolowski  * See "Revision Guide for AMD Athlon(tm) 64 and AMD Opteron(tm)
6547ab5bb018SKeith M Wesolowski  * Processors" #25759, Rev 3.57, August 2005
6548ab5bb018SKeith M Wesolowski  */
6549ab5bb018SKeith M Wesolowski int
6550ab5bb018SKeith M Wesolowski cpuid_opteron_erratum(cpu_t *cpu, uint_t erratum)
6551ab5bb018SKeith M Wesolowski {
6552ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
6553ab5bb018SKeith M Wesolowski 	uint_t eax;
6554ab5bb018SKeith M Wesolowski 
6555ab5bb018SKeith M Wesolowski 	/*
6556ab5bb018SKeith M Wesolowski 	 * Bail out if this CPU isn't an AMD CPU, or if it's
6557ab5bb018SKeith M Wesolowski 	 * a legacy (32-bit) AMD CPU.
6558ab5bb018SKeith M Wesolowski 	 */
6559ab5bb018SKeith M Wesolowski 	if (cpi->cpi_vendor != X86_VENDOR_AMD ||
6560ab5bb018SKeith M Wesolowski 	    cpi->cpi_family == 4 || cpi->cpi_family == 5 ||
6561ab5bb018SKeith M Wesolowski 	    cpi->cpi_family == 6) {
6562ab5bb018SKeith M Wesolowski 		return (0);
6563ab5bb018SKeith M Wesolowski 	}
6564ab5bb018SKeith M Wesolowski 
6565ab5bb018SKeith M Wesolowski 	eax = cpi->cpi_std[1].cp_eax;
6566ab5bb018SKeith M Wesolowski 
6567ab5bb018SKeith M Wesolowski #define	SH_B0(eax)	(eax == 0xf40 || eax == 0xf50)
6568ab5bb018SKeith M Wesolowski #define	SH_B3(eax)	(eax == 0xf51)
6569ab5bb018SKeith M Wesolowski #define	B(eax)		(SH_B0(eax) || SH_B3(eax))
6570ab5bb018SKeith M Wesolowski 
6571ab5bb018SKeith M Wesolowski #define	SH_C0(eax)	(eax == 0xf48 || eax == 0xf58)
6572ab5bb018SKeith M Wesolowski 
6573ab5bb018SKeith M Wesolowski #define	SH_CG(eax)	(eax == 0xf4a || eax == 0xf5a || eax == 0xf7a)
6574ab5bb018SKeith M Wesolowski #define	DH_CG(eax)	(eax == 0xfc0 || eax == 0xfe0 || eax == 0xff0)
6575ab5bb018SKeith M Wesolowski #define	CH_CG(eax)	(eax == 0xf82 || eax == 0xfb2)
6576ab5bb018SKeith M Wesolowski #define	CG(eax)		(SH_CG(eax) || DH_CG(eax) || CH_CG(eax))
6577ab5bb018SKeith M Wesolowski 
6578ab5bb018SKeith M Wesolowski #define	SH_D0(eax)	(eax == 0x10f40 || eax == 0x10f50 || eax == 0x10f70)
6579ab5bb018SKeith M Wesolowski #define	DH_D0(eax)	(eax == 0x10fc0 || eax == 0x10ff0)
6580ab5bb018SKeith M Wesolowski #define	CH_D0(eax)	(eax == 0x10f80 || eax == 0x10fb0)
6581ab5bb018SKeith M Wesolowski #define	D0(eax)		(SH_D0(eax) || DH_D0(eax) || CH_D0(eax))
6582ab5bb018SKeith M Wesolowski 
6583ab5bb018SKeith M Wesolowski #define	SH_E0(eax)	(eax == 0x20f50 || eax == 0x20f40 || eax == 0x20f70)
6584ab5bb018SKeith M Wesolowski #define	JH_E1(eax)	(eax == 0x20f10)	/* JH8_E0 had 0x20f30 */
6585ab5bb018SKeith M Wesolowski #define	DH_E3(eax)	(eax == 0x20fc0 || eax == 0x20ff0)
6586ab5bb018SKeith M Wesolowski #define	SH_E4(eax)	(eax == 0x20f51 || eax == 0x20f71)
6587ab5bb018SKeith M Wesolowski #define	BH_E4(eax)	(eax == 0x20fb1)
6588ab5bb018SKeith M Wesolowski #define	SH_E5(eax)	(eax == 0x20f42)
6589ab5bb018SKeith M Wesolowski #define	DH_E6(eax)	(eax == 0x20ff2 || eax == 0x20fc2)
6590ab5bb018SKeith M Wesolowski #define	JH_E6(eax)	(eax == 0x20f12 || eax == 0x20f32)
6591ab5bb018SKeith M Wesolowski #define	EX(eax)		(SH_E0(eax) || JH_E1(eax) || DH_E3(eax) || \
6592ab5bb018SKeith M Wesolowski 			    SH_E4(eax) || BH_E4(eax) || SH_E5(eax) || \
6593ab5bb018SKeith M Wesolowski 			    DH_E6(eax) || JH_E6(eax))
6594ab5bb018SKeith M Wesolowski 
6595ab5bb018SKeith M Wesolowski #define	DR_AX(eax)	(eax == 0x100f00 || eax == 0x100f01 || eax == 0x100f02)
6596ab5bb018SKeith M Wesolowski #define	DR_B0(eax)	(eax == 0x100f20)
6597ab5bb018SKeith M Wesolowski #define	DR_B1(eax)	(eax == 0x100f21)
6598ab5bb018SKeith M Wesolowski #define	DR_BA(eax)	(eax == 0x100f2a)
6599ab5bb018SKeith M Wesolowski #define	DR_B2(eax)	(eax == 0x100f22)
6600ab5bb018SKeith M Wesolowski #define	DR_B3(eax)	(eax == 0x100f23)
6601ab5bb018SKeith M Wesolowski #define	RB_C0(eax)	(eax == 0x100f40)
6602ab5bb018SKeith M Wesolowski 
6603ab5bb018SKeith M Wesolowski 	switch (erratum) {
6604ab5bb018SKeith M Wesolowski 	case 1:
6605ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family < 0x10);
6606ab5bb018SKeith M Wesolowski 	case 51:	/* what does the asterisk mean? */
6607ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax));
6608ab5bb018SKeith M Wesolowski 	case 52:
6609ab5bb018SKeith M Wesolowski 		return (B(eax));
6610ab5bb018SKeith M Wesolowski 	case 57:
6611ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family <= 0x11);
6612ab5bb018SKeith M Wesolowski 	case 58:
6613ab5bb018SKeith M Wesolowski 		return (B(eax));
6614ab5bb018SKeith M Wesolowski 	case 60:
6615ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family <= 0x11);
6616ab5bb018SKeith M Wesolowski 	case 61:
6617ab5bb018SKeith M Wesolowski 	case 62:
6618ab5bb018SKeith M Wesolowski 	case 63:
6619ab5bb018SKeith M Wesolowski 	case 64:
6620ab5bb018SKeith M Wesolowski 	case 65:
6621ab5bb018SKeith M Wesolowski 	case 66:
6622ab5bb018SKeith M Wesolowski 	case 68:
6623ab5bb018SKeith M Wesolowski 	case 69:
6624ab5bb018SKeith M Wesolowski 	case 70:
6625ab5bb018SKeith M Wesolowski 	case 71:
6626ab5bb018SKeith M Wesolowski 		return (B(eax));
6627ab5bb018SKeith M Wesolowski 	case 72:
6628ab5bb018SKeith M Wesolowski 		return (SH_B0(eax));
6629ab5bb018SKeith M Wesolowski 	case 74:
6630ab5bb018SKeith M Wesolowski 		return (B(eax));
6631ab5bb018SKeith M Wesolowski 	case 75:
6632ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family < 0x10);
6633ab5bb018SKeith M Wesolowski 	case 76:
6634ab5bb018SKeith M Wesolowski 		return (B(eax));
6635ab5bb018SKeith M Wesolowski 	case 77:
6636ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family <= 0x11);
6637ab5bb018SKeith M Wesolowski 	case 78:
6638ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax));
6639ab5bb018SKeith M Wesolowski 	case 79:
6640ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax) || D0(eax) || EX(eax));
6641ab5bb018SKeith M Wesolowski 	case 80:
6642ab5bb018SKeith M Wesolowski 	case 81:
6643ab5bb018SKeith M Wesolowski 	case 82:
6644ab5bb018SKeith M Wesolowski 		return (B(eax));
6645ab5bb018SKeith M Wesolowski 	case 83:
6646ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax));
6647ab5bb018SKeith M Wesolowski 	case 85:
6648ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family < 0x10);
6649ab5bb018SKeith M Wesolowski 	case 86:
6650ab5bb018SKeith M Wesolowski 		return (SH_C0(eax) || CG(eax));
6651ab5bb018SKeith M Wesolowski 	case 88:
6652ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax));
6653ab5bb018SKeith M Wesolowski 	case 89:
6654ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family < 0x10);
6655ab5bb018SKeith M Wesolowski 	case 90:
6656ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax));
6657ab5bb018SKeith M Wesolowski 	case 91:
6658ab5bb018SKeith M Wesolowski 	case 92:
6659ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax));
6660ab5bb018SKeith M Wesolowski 	case 93:
6661ab5bb018SKeith M Wesolowski 		return (SH_C0(eax));
6662ab5bb018SKeith M Wesolowski 	case 94:
6663ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax));
6664ab5bb018SKeith M Wesolowski 	case 95:
6665ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax));
6666ab5bb018SKeith M Wesolowski 	case 96:
6667ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax));
6668ab5bb018SKeith M Wesolowski 	case 97:
6669ab5bb018SKeith M Wesolowski 	case 98:
6670ab5bb018SKeith M Wesolowski 		return (SH_C0(eax) || CG(eax));
6671ab5bb018SKeith M Wesolowski 	case 99:
6672ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax) || D0(eax));
6673ab5bb018SKeith M Wesolowski 	case 100:
6674ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax));
6675ab5bb018SKeith M Wesolowski 	case 101:
6676ab5bb018SKeith M Wesolowski 	case 103:
6677ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax) || D0(eax));
6678ab5bb018SKeith M Wesolowski 	case 104:
6679ab5bb018SKeith M Wesolowski 		return (SH_C0(eax) || CG(eax) || D0(eax));
6680ab5bb018SKeith M Wesolowski 	case 105:
6681ab5bb018SKeith M Wesolowski 	case 106:
6682ab5bb018SKeith M Wesolowski 	case 107:
6683ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax) || D0(eax));
6684ab5bb018SKeith M Wesolowski 	case 108:
6685ab5bb018SKeith M Wesolowski 		return (DH_CG(eax));
6686ab5bb018SKeith M Wesolowski 	case 109:
6687ab5bb018SKeith M Wesolowski 		return (SH_C0(eax) || CG(eax) || D0(eax));
6688ab5bb018SKeith M Wesolowski 	case 110:
6689ab5bb018SKeith M Wesolowski 		return (D0(eax) || EX(eax));
6690ab5bb018SKeith M Wesolowski 	case 111:
6691ab5bb018SKeith M Wesolowski 		return (CG(eax));
6692ab5bb018SKeith M Wesolowski 	case 112:
6693ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax) || D0(eax) || EX(eax));
6694ab5bb018SKeith M Wesolowski 	case 113:
6695ab5bb018SKeith M Wesolowski 		return (eax == 0x20fc0);
6696ab5bb018SKeith M Wesolowski 	case 114:
6697ab5bb018SKeith M Wesolowski 		return (SH_E0(eax) || JH_E1(eax) || DH_E3(eax));
6698ab5bb018SKeith M Wesolowski 	case 115:
6699ab5bb018SKeith M Wesolowski 		return (SH_E0(eax) || JH_E1(eax));
6700ab5bb018SKeith M Wesolowski 	case 116:
6701ab5bb018SKeith M Wesolowski 		return (SH_E0(eax) || JH_E1(eax) || DH_E3(eax));
6702ab5bb018SKeith M Wesolowski 	case 117:
6703ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax) || D0(eax));
6704ab5bb018SKeith M Wesolowski 	case 118:
6705ab5bb018SKeith M Wesolowski 		return (SH_E0(eax) || JH_E1(eax) || SH_E4(eax) || BH_E4(eax) ||
6706ab5bb018SKeith M Wesolowski 		    JH_E6(eax));
6707ab5bb018SKeith M Wesolowski 	case 121:
6708ab5bb018SKeith M Wesolowski 		return (B(eax) || SH_C0(eax) || CG(eax) || D0(eax) || EX(eax));
6709ab5bb018SKeith M Wesolowski 	case 122:
6710ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family < 0x10 || cpi->cpi_family == 0x11);
6711ab5bb018SKeith M Wesolowski 	case 123:
6712ab5bb018SKeith M Wesolowski 		return (JH_E1(eax) || BH_E4(eax) || JH_E6(eax));
6713ab5bb018SKeith M Wesolowski 	case 131:
6714ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family < 0x10);
6715ab5bb018SKeith M Wesolowski 	case 6336786:
6716ab5bb018SKeith M Wesolowski 
6717ab5bb018SKeith M Wesolowski 		/*
6718ab5bb018SKeith M Wesolowski 		 * Test for AdvPowerMgmtInfo.TscPStateInvariant
6719ab5bb018SKeith M Wesolowski 		 * if this is a K8 family or newer processor. We're testing for
6720ab5bb018SKeith M Wesolowski 		 * this 'erratum' to determine whether or not we have a constant
6721ab5bb018SKeith M Wesolowski 		 * TSC.
6722ab5bb018SKeith M Wesolowski 		 *
6723ab5bb018SKeith M Wesolowski 		 * Our current fix for this is to disable the C1-Clock ramping.
6724ab5bb018SKeith M Wesolowski 		 * However, this doesn't work on newer processor families nor
6725ab5bb018SKeith M Wesolowski 		 * does it work when virtualized as those devices don't exist.
6726ab5bb018SKeith M Wesolowski 		 */
6727ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family >= 0x12 || get_hwenv() != HW_NATIVE) {
6728ab5bb018SKeith M Wesolowski 			return (0);
6729ab5bb018SKeith M Wesolowski 		}
6730ab5bb018SKeith M Wesolowski 
6731ab5bb018SKeith M Wesolowski 		if (CPI_FAMILY(cpi) == 0xf) {
6732ab5bb018SKeith M Wesolowski 			struct cpuid_regs regs;
6733ab5bb018SKeith M Wesolowski 			regs.cp_eax = 0x80000007;
6734ab5bb018SKeith M Wesolowski 			(void) __cpuid_insn(&regs);
6735ab5bb018SKeith M Wesolowski 			return (!(regs.cp_edx & 0x100));
6736ab5bb018SKeith M Wesolowski 		}
6737ab5bb018SKeith M Wesolowski 		return (0);
6738ab5bb018SKeith M Wesolowski 	case 147:
6739ab5bb018SKeith M Wesolowski 		/*
6740ab5bb018SKeith M Wesolowski 		 * This erratum (K8 #147) is not present on family 10 and newer.
6741ab5bb018SKeith M Wesolowski 		 */
6742ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family >= 0x10) {
6743ab5bb018SKeith M Wesolowski 			return (0);
6744ab5bb018SKeith M Wesolowski 		}
6745ab5bb018SKeith M Wesolowski 		return (((((eax >> 12) & 0xff00) + (eax & 0xf00)) |
6746ab5bb018SKeith M Wesolowski 		    (((eax >> 4) & 0xf) | ((eax >> 12) & 0xf0))) < 0xf40);
6747ab5bb018SKeith M Wesolowski 
6748ab5bb018SKeith M Wesolowski 	case 6671130:
6749ab5bb018SKeith M Wesolowski 		/*
6750ab5bb018SKeith M Wesolowski 		 * check for processors (pre-Shanghai) that do not provide
6751ab5bb018SKeith M Wesolowski 		 * optimal management of 1gb ptes in their tlb.
6752ab5bb018SKeith M Wesolowski 		 */
6753ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family == 0x10 && cpi->cpi_model < 4);
6754ab5bb018SKeith M Wesolowski 
6755ab5bb018SKeith M Wesolowski 	case 298:
6756ab5bb018SKeith M Wesolowski 		return (DR_AX(eax) || DR_B0(eax) || DR_B1(eax) || DR_BA(eax) ||
6757ab5bb018SKeith M Wesolowski 		    DR_B2(eax) || RB_C0(eax));
6758ab5bb018SKeith M Wesolowski 
6759ab5bb018SKeith M Wesolowski 	case 721:
6760ab5bb018SKeith M Wesolowski 		return (cpi->cpi_family == 0x10 || cpi->cpi_family == 0x12);
6761ab5bb018SKeith M Wesolowski 
6762ab5bb018SKeith M Wesolowski 	default:
6763ab5bb018SKeith M Wesolowski 		return (-1);
6764ab5bb018SKeith M Wesolowski 
6765ab5bb018SKeith M Wesolowski 	}
6766ab5bb018SKeith M Wesolowski }
6767ab5bb018SKeith M Wesolowski 
6768ab5bb018SKeith M Wesolowski /*
6769ab5bb018SKeith M Wesolowski  * Determine if specified erratum is present via OSVW (OS Visible Workaround).
6770ab5bb018SKeith M Wesolowski  * Return 1 if erratum is present, 0 if not present and -1 if indeterminate.
6771ab5bb018SKeith M Wesolowski  */
6772ab5bb018SKeith M Wesolowski int
6773ab5bb018SKeith M Wesolowski osvw_opteron_erratum(cpu_t *cpu, uint_t erratum)
6774ab5bb018SKeith M Wesolowski {
6775ab5bb018SKeith M Wesolowski 	struct cpuid_info	*cpi;
6776ab5bb018SKeith M Wesolowski 	uint_t			osvwid;
6777ab5bb018SKeith M Wesolowski 	static int		osvwfeature = -1;
6778ab5bb018SKeith M Wesolowski 	uint64_t		osvwlength;
6779ab5bb018SKeith M Wesolowski 
6780ab5bb018SKeith M Wesolowski 
6781ab5bb018SKeith M Wesolowski 	cpi = cpu->cpu_m.mcpu_cpi;
6782ab5bb018SKeith M Wesolowski 
6783ab5bb018SKeith M Wesolowski 	/* confirm OSVW supported */
6784ab5bb018SKeith M Wesolowski 	if (osvwfeature == -1) {
6785ab5bb018SKeith M Wesolowski 		osvwfeature = cpi->cpi_extd[1].cp_ecx & CPUID_AMD_ECX_OSVW;
6786ab5bb018SKeith M Wesolowski 	} else {
6787ab5bb018SKeith M Wesolowski 		/* assert that osvw feature setting is consistent on all cpus */
6788ab5bb018SKeith M Wesolowski 		ASSERT(osvwfeature ==
6789ab5bb018SKeith M Wesolowski 		    (cpi->cpi_extd[1].cp_ecx & CPUID_AMD_ECX_OSVW));
6790ab5bb018SKeith M Wesolowski 	}
6791ab5bb018SKeith M Wesolowski 	if (!osvwfeature)
6792ab5bb018SKeith M Wesolowski 		return (-1);
6793ab5bb018SKeith M Wesolowski 
6794ab5bb018SKeith M Wesolowski 	osvwlength = rdmsr(MSR_AMD_OSVW_ID_LEN) & OSVW_ID_LEN_MASK;
6795ab5bb018SKeith M Wesolowski 
6796ab5bb018SKeith M Wesolowski 	switch (erratum) {
6797ab5bb018SKeith M Wesolowski 	case 298:	/* osvwid is 0 */
6798ab5bb018SKeith M Wesolowski 		osvwid = 0;
6799ab5bb018SKeith M Wesolowski 		if (osvwlength <= (uint64_t)osvwid) {
6800ab5bb018SKeith M Wesolowski 			/* osvwid 0 is unknown */
6801ab5bb018SKeith M Wesolowski 			return (-1);
6802ab5bb018SKeith M Wesolowski 		}
6803ab5bb018SKeith M Wesolowski 
6804ab5bb018SKeith M Wesolowski 		/*
6805ab5bb018SKeith M Wesolowski 		 * Check the OSVW STATUS MSR to determine the state
6806ab5bb018SKeith M Wesolowski 		 * of the erratum where:
6807ab5bb018SKeith M Wesolowski 		 *   0 - fixed by HW
6808ab5bb018SKeith M Wesolowski 		 *   1 - BIOS has applied the workaround when BIOS
6809ab5bb018SKeith M Wesolowski 		 *   workaround is available. (Or for other errata,
6810ab5bb018SKeith M Wesolowski 		 *   OS workaround is required.)
6811ab5bb018SKeith M Wesolowski 		 * For a value of 1, caller will confirm that the
6812ab5bb018SKeith M Wesolowski 		 * erratum 298 workaround has indeed been applied by BIOS.
6813ab5bb018SKeith M Wesolowski 		 *
6814ab5bb018SKeith M Wesolowski 		 * A 1 may be set in cpus that have a HW fix
6815ab5bb018SKeith M Wesolowski 		 * in a mixed cpu system. Regarding erratum 298:
6816ab5bb018SKeith M Wesolowski 		 *   In a multiprocessor platform, the workaround above
6817ab5bb018SKeith M Wesolowski 		 *   should be applied to all processors regardless of
6818ab5bb018SKeith M Wesolowski 		 *   silicon revision when an affected processor is
6819ab5bb018SKeith M Wesolowski 		 *   present.
6820ab5bb018SKeith M Wesolowski 		 */
6821ab5bb018SKeith M Wesolowski 
6822ab5bb018SKeith M Wesolowski 		return (rdmsr(MSR_AMD_OSVW_STATUS +
6823ab5bb018SKeith M Wesolowski 		    (osvwid / OSVW_ID_CNT_PER_MSR)) &
6824ab5bb018SKeith M Wesolowski 		    (1ULL << (osvwid % OSVW_ID_CNT_PER_MSR)));
6825ab5bb018SKeith M Wesolowski 
6826ab5bb018SKeith M Wesolowski 	default:
6827ab5bb018SKeith M Wesolowski 		return (-1);
6828ab5bb018SKeith M Wesolowski 	}
6829ab5bb018SKeith M Wesolowski }
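
/*
 * Editor's illustrative sketch (hypothetical, not original code): callers
 * typically try the OSVW mechanism first and fall back to model/stepping
 * matching only when OSVW cannot answer, since either routine may return
 * -1 for "indeterminate".
 */
#if 0
static boolean_t
example_has_erratum_298(cpu_t *cpu)
{
	int osvw = osvw_opteron_erratum(cpu, 298);

	if (osvw >= 0)
		return (osvw > 0 ? B_TRUE : B_FALSE);
	return (cpuid_opteron_erratum(cpu, 298) > 0 ? B_TRUE : B_FALSE);
}
#endif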
6830ab5bb018SKeith M Wesolowski 
6831ab5bb018SKeith M Wesolowski static const char assoc_str[] = "associativity";
6832ab5bb018SKeith M Wesolowski static const char line_str[] = "line-size";
6833ab5bb018SKeith M Wesolowski static const char size_str[] = "size";
6834ab5bb018SKeith M Wesolowski 
6835ab5bb018SKeith M Wesolowski static void
6836ab5bb018SKeith M Wesolowski add_cache_prop(dev_info_t *devi, const char *label, const char *type,
6837ab5bb018SKeith M Wesolowski     uint32_t val)
6838ab5bb018SKeith M Wesolowski {
6839ab5bb018SKeith M Wesolowski 	char buf[128];
6840ab5bb018SKeith M Wesolowski 
6841ab5bb018SKeith M Wesolowski 	/*
6842ab5bb018SKeith M Wesolowski 	 * ndi_prop_update_int() is used because it is desirable for
6843ab5bb018SKeith M Wesolowski 	 * DDI_PROP_HW_DEF and DDI_PROP_DONTSLEEP to be set.
6844ab5bb018SKeith M Wesolowski 	 */
6845ab5bb018SKeith M Wesolowski 	if (snprintf(buf, sizeof (buf), "%s-%s", label, type) < sizeof (buf))
6846ab5bb018SKeith M Wesolowski 		(void) ndi_prop_update_int(DDI_DEV_T_NONE, devi, buf, val);
6847ab5bb018SKeith M Wesolowski }
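
/*
 * Editor's illustrative sketch (hypothetical, not original code): a single
 * cache is described by three calls, yielding properties such as
 * "l2-cache-size", "l2-cache-line-size", and "l2-cache-associativity" on
 * the given devinfo node.
 */
#if 0
static void
example_l2_props(dev_info_t *devi)
{
	add_cache_prop(devi, "l2-cache", size_str, 512 * 1024);
	add_cache_prop(devi, "l2-cache", line_str, 64);
	add_cache_prop(devi, "l2-cache", assoc_str, 8);
}
#endif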
6848ab5bb018SKeith M Wesolowski 
6849ab5bb018SKeith M Wesolowski /*
6850ab5bb018SKeith M Wesolowski  * Intel-style cache/tlb description
6851ab5bb018SKeith M Wesolowski  *
6852ab5bb018SKeith M Wesolowski  * Standard cpuid level 2 gives a randomly ordered
6853ab5bb018SKeith M Wesolowski  * selection of tags that index into a table that describes
6854ab5bb018SKeith M Wesolowski  * cache and tlb properties.
6855ab5bb018SKeith M Wesolowski  */
6856ab5bb018SKeith M Wesolowski 
6857ab5bb018SKeith M Wesolowski static const char l1_icache_str[] = "l1-icache";
6858ab5bb018SKeith M Wesolowski static const char l1_dcache_str[] = "l1-dcache";
6859ab5bb018SKeith M Wesolowski static const char l2_cache_str[] = "l2-cache";
6860ab5bb018SKeith M Wesolowski static const char l3_cache_str[] = "l3-cache";
6861ab5bb018SKeith M Wesolowski static const char itlb4k_str[] = "itlb-4K";
6862ab5bb018SKeith M Wesolowski static const char dtlb4k_str[] = "dtlb-4K";
6863ab5bb018SKeith M Wesolowski static const char itlb2M_str[] = "itlb-2M";
6864ab5bb018SKeith M Wesolowski static const char itlb4M_str[] = "itlb-4M";
6865ab5bb018SKeith M Wesolowski static const char dtlb4M_str[] = "dtlb-4M";
6866ab5bb018SKeith M Wesolowski static const char dtlb24_str[] = "dtlb0-2M-4M";
6867ab5bb018SKeith M Wesolowski static const char itlb424_str[] = "itlb-4K-2M-4M";
6868ab5bb018SKeith M Wesolowski static const char itlb24_str[] = "itlb-2M-4M";
6869ab5bb018SKeith M Wesolowski static const char dtlb44_str[] = "dtlb-4K-4M";
6870ab5bb018SKeith M Wesolowski static const char sl1_dcache_str[] = "sectored-l1-dcache";
6871ab5bb018SKeith M Wesolowski static const char sl2_cache_str[] = "sectored-l2-cache";
6872ab5bb018SKeith M Wesolowski static const char itrace_str[] = "itrace-cache";
6873ab5bb018SKeith M Wesolowski static const char sl3_cache_str[] = "sectored-l3-cache";
6874ab5bb018SKeith M Wesolowski static const char sh_l2_tlb4k_str[] = "shared-l2-tlb-4k";
6875ab5bb018SKeith M Wesolowski 
6876ab5bb018SKeith M Wesolowski static const struct cachetab {
6877ab5bb018SKeith M Wesolowski 	uint8_t		ct_code;
6878ab5bb018SKeith M Wesolowski 	uint8_t		ct_assoc;
6879ab5bb018SKeith M Wesolowski 	uint16_t	ct_line_size;
6880ab5bb018SKeith M Wesolowski 	size_t		ct_size;
6881ab5bb018SKeith M Wesolowski 	const char	*ct_label;
6882ab5bb018SKeith M Wesolowski } intel_ctab[] = {
6883ab5bb018SKeith M Wesolowski 	/*
6884ab5bb018SKeith M Wesolowski 	 * maintain descending order!
6885ab5bb018SKeith M Wesolowski 	 *
6886ab5bb018SKeith M Wesolowski 	 * Codes ignored - Reason
6887ab5bb018SKeith M Wesolowski 	 * ----------------------
6888ab5bb018SKeith M Wesolowski 	 * 40H - intel_cpuid_4_cache_info() disambiguates l2/l3 cache
6889ab5bb018SKeith M Wesolowski 	 * f0H/f1H - Currently we do not interpret prefetch size by design
6890ab5bb018SKeith M Wesolowski 	 */
6891ab5bb018SKeith M Wesolowski 	{ 0xe4, 16, 64, 8*1024*1024, l3_cache_str},
6892ab5bb018SKeith M Wesolowski 	{ 0xe3, 16, 64, 4*1024*1024, l3_cache_str},
6893ab5bb018SKeith M Wesolowski 	{ 0xe2, 16, 64, 2*1024*1024, l3_cache_str},
6894ab5bb018SKeith M Wesolowski 	{ 0xde, 12, 64, 6*1024*1024, l3_cache_str},
6895ab5bb018SKeith M Wesolowski 	{ 0xdd, 12, 64, 3*1024*1024, l3_cache_str},
6896ab5bb018SKeith M Wesolowski 	{ 0xdc, 12, 64, ((1*1024*1024)+(512*1024)), l3_cache_str},
6897ab5bb018SKeith M Wesolowski 	{ 0xd8, 8, 64, 4*1024*1024, l3_cache_str},
6898ab5bb018SKeith M Wesolowski 	{ 0xd7, 8, 64, 2*1024*1024, l3_cache_str},
6899ab5bb018SKeith M Wesolowski 	{ 0xd6, 8, 64, 1*1024*1024, l3_cache_str},
6900ab5bb018SKeith M Wesolowski 	{ 0xd2, 4, 64, 2*1024*1024, l3_cache_str},
6901ab5bb018SKeith M Wesolowski 	{ 0xd1, 4, 64, 1*1024*1024, l3_cache_str},
6902ab5bb018SKeith M Wesolowski 	{ 0xd0, 4, 64, 512*1024, l3_cache_str},
6903ab5bb018SKeith M Wesolowski 	{ 0xca, 4, 0, 512, sh_l2_tlb4k_str},
6904ab5bb018SKeith M Wesolowski 	{ 0xc0, 4, 0, 8, dtlb44_str },
6905ab5bb018SKeith M Wesolowski 	{ 0xba, 4, 0, 64, dtlb4k_str },
6906ab5bb018SKeith M Wesolowski 	{ 0xb4, 4, 0, 256, dtlb4k_str },
6907ab5bb018SKeith M Wesolowski 	{ 0xb3, 4, 0, 128, dtlb4k_str },
6908ab5bb018SKeith M Wesolowski 	{ 0xb2, 4, 0, 64, itlb4k_str },
6909ab5bb018SKeith M Wesolowski 	{ 0xb0, 4, 0, 128, itlb4k_str },
6910ab5bb018SKeith M Wesolowski 	{ 0x87, 8, 64, 1024*1024, l2_cache_str},
6911ab5bb018SKeith M Wesolowski 	{ 0x86, 4, 64, 512*1024, l2_cache_str},
6912ab5bb018SKeith M Wesolowski 	{ 0x85, 8, 32, 2*1024*1024, l2_cache_str},
6913ab5bb018SKeith M Wesolowski 	{ 0x84, 8, 32, 1024*1024, l2_cache_str},
6914ab5bb018SKeith M Wesolowski 	{ 0x83, 8, 32, 512*1024, l2_cache_str},
6915ab5bb018SKeith M Wesolowski 	{ 0x82, 8, 32, 256*1024, l2_cache_str},
6916ab5bb018SKeith M Wesolowski 	{ 0x80, 8, 64, 512*1024, l2_cache_str},
6917ab5bb018SKeith M Wesolowski 	{ 0x7f, 2, 64, 512*1024, l2_cache_str},
6918ab5bb018SKeith M Wesolowski 	{ 0x7d, 8, 64, 2*1024*1024, sl2_cache_str},
6919ab5bb018SKeith M Wesolowski 	{ 0x7c, 8, 64, 1024*1024, sl2_cache_str},
6920ab5bb018SKeith M Wesolowski 	{ 0x7b, 8, 64, 512*1024, sl2_cache_str},
6921ab5bb018SKeith M Wesolowski 	{ 0x7a, 8, 64, 256*1024, sl2_cache_str},
6922ab5bb018SKeith M Wesolowski 	{ 0x79, 8, 64, 128*1024, sl2_cache_str},
6923ab5bb018SKeith M Wesolowski 	{ 0x78, 8, 64, 1024*1024, l2_cache_str},
6924ab5bb018SKeith M Wesolowski 	{ 0x73, 8, 0, 64*1024, itrace_str},
6925ab5bb018SKeith M Wesolowski 	{ 0x72, 8, 0, 32*1024, itrace_str},
6926ab5bb018SKeith M Wesolowski 	{ 0x71, 8, 0, 16*1024, itrace_str},
6927ab5bb018SKeith M Wesolowski 	{ 0x70, 8, 0, 12*1024, itrace_str},
6928ab5bb018SKeith M Wesolowski 	{ 0x68, 4, 64, 32*1024, sl1_dcache_str},
6929ab5bb018SKeith M Wesolowski 	{ 0x67, 4, 64, 16*1024, sl1_dcache_str},
6930ab5bb018SKeith M Wesolowski 	{ 0x66, 4, 64, 8*1024, sl1_dcache_str},
6931ab5bb018SKeith M Wesolowski 	{ 0x60, 8, 64, 16*1024, sl1_dcache_str},
6932ab5bb018SKeith M Wesolowski 	{ 0x5d, 0, 0, 256, dtlb44_str},
6933ab5bb018SKeith M Wesolowski 	{ 0x5c, 0, 0, 128, dtlb44_str},
6934ab5bb018SKeith M Wesolowski 	{ 0x5b, 0, 0, 64, dtlb44_str},
6935ab5bb018SKeith M Wesolowski 	{ 0x5a, 4, 0, 32, dtlb24_str},
6936ab5bb018SKeith M Wesolowski 	{ 0x59, 0, 0, 16, dtlb4k_str},
6937ab5bb018SKeith M Wesolowski 	{ 0x57, 4, 0, 16, dtlb4k_str},
6938ab5bb018SKeith M Wesolowski 	{ 0x56, 4, 0, 16, dtlb4M_str},
6939ab5bb018SKeith M Wesolowski 	{ 0x55, 0, 0, 7, itlb24_str},
6940ab5bb018SKeith M Wesolowski 	{ 0x52, 0, 0, 256, itlb424_str},
6941ab5bb018SKeith M Wesolowski 	{ 0x51, 0, 0, 128, itlb424_str},
6942ab5bb018SKeith M Wesolowski 	{ 0x50, 0, 0, 64, itlb424_str},
6943ab5bb018SKeith M Wesolowski 	{ 0x4f, 0, 0, 32, itlb4k_str},
6944ab5bb018SKeith M Wesolowski 	{ 0x4e, 24, 64, 6*1024*1024, l2_cache_str},
6945ab5bb018SKeith M Wesolowski 	{ 0x4d, 16, 64, 16*1024*1024, l3_cache_str},
6946ab5bb018SKeith M Wesolowski 	{ 0x4c, 12, 64, 12*1024*1024, l3_cache_str},
6947ab5bb018SKeith M Wesolowski 	{ 0x4b, 16, 64, 8*1024*1024, l3_cache_str},
6948ab5bb018SKeith M Wesolowski 	{ 0x4a, 12, 64, 6*1024*1024, l3_cache_str},
6949ab5bb018SKeith M Wesolowski 	{ 0x49, 16, 64, 4*1024*1024, l3_cache_str},
6950ab5bb018SKeith M Wesolowski 	{ 0x48, 12, 64, 3*1024*1024, l2_cache_str},
6951ab5bb018SKeith M Wesolowski 	{ 0x47, 8, 64, 8*1024*1024, l3_cache_str},
6952ab5bb018SKeith M Wesolowski 	{ 0x46, 4, 64, 4*1024*1024, l3_cache_str},
6953ab5bb018SKeith M Wesolowski 	{ 0x45, 4, 32, 2*1024*1024, l2_cache_str},
6954ab5bb018SKeith M Wesolowski 	{ 0x44, 4, 32, 1024*1024, l2_cache_str},
6955ab5bb018SKeith M Wesolowski 	{ 0x43, 4, 32, 512*1024, l2_cache_str},
6956ab5bb018SKeith M Wesolowski 	{ 0x42, 4, 32, 256*1024, l2_cache_str},
6957ab5bb018SKeith M Wesolowski 	{ 0x41, 4, 32, 128*1024, l2_cache_str},
6958ab5bb018SKeith M Wesolowski 	{ 0x3e, 4, 64, 512*1024, sl2_cache_str},
6959ab5bb018SKeith M Wesolowski 	{ 0x3d, 6, 64, 384*1024, sl2_cache_str},
6960ab5bb018SKeith M Wesolowski 	{ 0x3c, 4, 64, 256*1024, sl2_cache_str},
6961ab5bb018SKeith M Wesolowski 	{ 0x3b, 2, 64, 128*1024, sl2_cache_str},
6962ab5bb018SKeith M Wesolowski 	{ 0x3a, 6, 64, 192*1024, sl2_cache_str},
6963ab5bb018SKeith M Wesolowski 	{ 0x39, 4, 64, 128*1024, sl2_cache_str},
6964ab5bb018SKeith M Wesolowski 	{ 0x30, 8, 64, 32*1024, l1_icache_str},
6965ab5bb018SKeith M Wesolowski 	{ 0x2c, 8, 64, 32*1024, l1_dcache_str},
6966ab5bb018SKeith M Wesolowski 	{ 0x29, 8, 64, 4096*1024, sl3_cache_str},
6967ab5bb018SKeith M Wesolowski 	{ 0x25, 8, 64, 2048*1024, sl3_cache_str},
6968ab5bb018SKeith M Wesolowski 	{ 0x23, 8, 64, 1024*1024, sl3_cache_str},
6969ab5bb018SKeith M Wesolowski 	{ 0x22, 4, 64, 512*1024, sl3_cache_str},
6970ab5bb018SKeith M Wesolowski 	{ 0x0e, 6, 64, 24*1024, l1_dcache_str},
6971ab5bb018SKeith M Wesolowski 	{ 0x0d, 4, 32, 16*1024, l1_dcache_str},
6972ab5bb018SKeith M Wesolowski 	{ 0x0c, 4, 32, 16*1024, l1_dcache_str},
6973ab5bb018SKeith M Wesolowski 	{ 0x0b, 4, 0, 4, itlb4M_str},
6974ab5bb018SKeith M Wesolowski 	{ 0x0a, 2, 32, 8*1024, l1_dcache_str},
6975ab5bb018SKeith M Wesolowski 	{ 0x08, 4, 32, 16*1024, l1_icache_str},
6976ab5bb018SKeith M Wesolowski 	{ 0x06, 4, 32, 8*1024, l1_icache_str},
6977ab5bb018SKeith M Wesolowski 	{ 0x05, 4, 0, 32, dtlb4M_str},
6978ab5bb018SKeith M Wesolowski 	{ 0x04, 4, 0, 8, dtlb4M_str},
6979ab5bb018SKeith M Wesolowski 	{ 0x03, 4, 0, 64, dtlb4k_str},
6980ab5bb018SKeith M Wesolowski 	{ 0x02, 4, 0, 2, itlb4M_str},
6981ab5bb018SKeith M Wesolowski 	{ 0x01, 4, 0, 32, itlb4k_str},
6982ab5bb018SKeith M Wesolowski 	{ 0 }
6983ab5bb018SKeith M Wesolowski };
6984ab5bb018SKeith M Wesolowski 
6985ab5bb018SKeith M Wesolowski static const struct cachetab cyrix_ctab[] = {
6986ab5bb018SKeith M Wesolowski 	{ 0x70, 4, 0, 32, "tlb-4K" },
6987ab5bb018SKeith M Wesolowski 	{ 0x80, 4, 16, 16*1024, "l1-cache" },
6988ab5bb018SKeith M Wesolowski 	{ 0 }
6989ab5bb018SKeith M Wesolowski };
6990ab5bb018SKeith M Wesolowski 
6991ab5bb018SKeith M Wesolowski /*
6992ab5bb018SKeith M Wesolowski  * Search a cache table for a matching entry
6993ab5bb018SKeith M Wesolowski  */
6994ab5bb018SKeith M Wesolowski static const struct cachetab *
6995ab5bb018SKeith M Wesolowski find_cacheent(const struct cachetab *ct, uint_t code)
6996ab5bb018SKeith M Wesolowski {
6997ab5bb018SKeith M Wesolowski 	if (code != 0) {
6998ab5bb018SKeith M Wesolowski 		for (; ct->ct_code != 0; ct++)
6999ab5bb018SKeith M Wesolowski 			if (ct->ct_code <= code)
7000ab5bb018SKeith M Wesolowski 				break;
7001ab5bb018SKeith M Wesolowski 		if (ct->ct_code == code)
7002ab5bb018SKeith M Wesolowski 			return (ct);
7003ab5bb018SKeith M Wesolowski 	}
7004ab5bb018SKeith M Wesolowski 	return (NULL);
7005ab5bb018SKeith M Wesolowski }
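
/*
 * Illustrative note: intel_ctab is sorted in descending descriptor order,
 * so the loop above stops at the first entry whose code is <= the one
 * sought and then checks for an exact match. For example, a hypothetical
 * lookup
 *
 *	const struct cachetab *ct = find_cacheent(intel_ctab, 0x43);
 *
 * walks past the larger descriptors and returns the 0x43 entry (4-way,
 * 32-byte lines, 512KB, l2_cache_str), while an unknown code such as 0x40
 * stops at a smaller entry and returns NULL.
 */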

/*
 * Populate cachetab entry with L2 or L3 cache-information using
 * cpuid function 4. This function is called from intel_walk_cacheinfo()
 * when descriptor 0x49 is encountered. It returns 0 if no such cache
 * information is found.
 */
static int
intel_cpuid_4_cache_info(struct cachetab *ct, struct cpuid_info *cpi)
{
	uint32_t level, i;
	int ret = 0;

	for (i = 0; i < cpi->cpi_cache_leaf_size; i++) {
		level = CPI_CACHE_LVL(cpi->cpi_cache_leaves[i]);

		if (level == 2 || level == 3) {
			ct->ct_assoc =
			    CPI_CACHE_WAYS(cpi->cpi_cache_leaves[i]) + 1;
			ct->ct_line_size =
			    CPI_CACHE_COH_LN_SZ(cpi->cpi_cache_leaves[i]) + 1;
			ct->ct_size = ct->ct_assoc *
			    (CPI_CACHE_PARTS(cpi->cpi_cache_leaves[i]) + 1) *
			    ct->ct_line_size *
			    (cpi->cpi_cache_leaves[i]->cp_ecx + 1);

			if (level == 2) {
				ct->ct_label = l2_cache_str;
			} else if (level == 3) {
				ct->ct_label = l3_cache_str;
			}
			ret = 1;
		}
	}

	return (ret);
}
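
/*
 * Worked example (illustrative values, not taken from any particular
 * part): for a leaf-4 subleaf describing a 16-way L3 with 64-byte lines,
 * one partition, and 4096 sets, the computation above yields
 *
 *	ct_size = 16 ways * 1 partition * 64 bytes/line * 4096 sets
 *	        = 4194304 bytes (4MB)
 *
 * since the WAYS, PARTS, COH_LN_SZ, and %ecx (sets) fields all encode
 * their values minus one.
 */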

/*
 * Walk the cacheinfo descriptor, applying 'func' to every valid element.
 * The walk is terminated if the walker returns non-zero.
 */
static void
intel_walk_cacheinfo(struct cpuid_info *cpi,
    void *arg, int (*func)(void *, const struct cachetab *))
{
	const struct cachetab *ct;
	struct cachetab des_49_ct, des_b1_ct;
	uint8_t *dp;
	int i;

	if ((dp = cpi->cpi_cacheinfo) == NULL)
		return;
	for (i = 0; i < cpi->cpi_ncache; i++, dp++) {
		/*
		 * For the overloaded descriptor 0x49 we use cpuid function 4,
		 * if supported by the current processor, to create the cache
		 * information.
		 * For the overloaded descriptor 0xb1 we use the X86_PAE flag
		 * to disambiguate the cache information.
		 */
		if (*dp == 0x49 && cpi->cpi_maxeax >= 0x4 &&
		    intel_cpuid_4_cache_info(&des_49_ct, cpi) == 1) {
			ct = &des_49_ct;
		} else if (*dp == 0xb1) {
			des_b1_ct.ct_code = 0xb1;
			des_b1_ct.ct_assoc = 4;
			des_b1_ct.ct_line_size = 0;
			if (is_x86_feature(x86_featureset, X86FSET_PAE)) {
				des_b1_ct.ct_size = 8;
				des_b1_ct.ct_label = itlb2M_str;
			} else {
				des_b1_ct.ct_size = 4;
				des_b1_ct.ct_label = itlb4M_str;
			}
			ct = &des_b1_ct;
		} else {
			if ((ct = find_cacheent(intel_ctab, *dp)) == NULL) {
				continue;
			}
		}

		if (func(arg, ct) != 0) {
			break;
		}
	}
}

/*
 * (Like the Intel one, except for Cyrix CPUs)
 */
static void
cyrix_walk_cacheinfo(struct cpuid_info *cpi,
    void *arg, int (*func)(void *, const struct cachetab *))
{
	const struct cachetab *ct;
	uint8_t *dp;
	int i;

	if ((dp = cpi->cpi_cacheinfo) == NULL)
		return;
	for (i = 0; i < cpi->cpi_ncache; i++, dp++) {
		/*
		 * Search Cyrix-specific descriptor table first ..
		 */
		if ((ct = find_cacheent(cyrix_ctab, *dp)) != NULL) {
			if (func(arg, ct) != 0)
				break;
			continue;
		}
		/*
		 * .. else fall back to the Intel one
		 */
		if ((ct = find_cacheent(intel_ctab, *dp)) != NULL) {
			if (func(arg, ct) != 0)
				break;
			continue;
		}
	}
}

/*
 * A cacheinfo walker that adds associativity, line-size, and size properties
 * to the devinfo node it is passed as an argument.
 */
static int
add_cacheent_props(void *arg, const struct cachetab *ct)
{
	dev_info_t *devi = arg;

	add_cache_prop(devi, ct->ct_label, assoc_str, ct->ct_assoc);
	if (ct->ct_line_size != 0)
		add_cache_prop(devi, ct->ct_label, line_str,
		    ct->ct_line_size);
	add_cache_prop(devi, ct->ct_label, size_str, ct->ct_size);
	return (0);
}


static const char fully_assoc[] = "fully-associative?";

/*
 * AMD style cache/tlb description
 *
 * Extended functions 5 and 6 directly describe properties of
 * tlbs and various cache levels.
 */
static void
add_amd_assoc(dev_info_t *devi, const char *label, uint_t assoc)
{
	switch (assoc) {
	case 0:	/* reserved; ignore */
		break;
	default:
		add_cache_prop(devi, label, assoc_str, assoc);
		break;
	case 0xff:
		add_cache_prop(devi, label, fully_assoc, 1);
		break;
	}
}

static void
add_amd_tlb(dev_info_t *devi, const char *label, uint_t assoc, uint_t size)
{
	if (size == 0)
		return;
	add_cache_prop(devi, label, size_str, size);
	add_amd_assoc(devi, label, assoc);
}

static void
add_amd_cache(dev_info_t *devi, const char *label,
    uint_t size, uint_t assoc, uint_t lines_per_tag, uint_t line_size)
{
	if (size == 0 || line_size == 0)
		return;
	add_amd_assoc(devi, label, assoc);
	/*
	 * Most AMD parts have a sectored cache. Multiple cache lines are
	 * associated with each tag. A sector consists of all cache lines
	 * associated with a tag. For example, the AMD K6-III has a sector
	 * size of 2 cache lines per tag.
	 */
	if (lines_per_tag != 0)
		add_cache_prop(devi, label, "lines-per-tag", lines_per_tag);
	add_cache_prop(devi, label, line_str, line_size);
	add_cache_prop(devi, label, size_str, size * 1024);
}
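
/*
 * Worked example (hypothetical register value): for extended function
 * 0x80000005, %ecx describes the L1 data cache, and a K8-style value of
 * 0x40020140 decodes as
 *
 *	BITX(ecx, 31, 24) = 0x40 = 64	(size in KB)
 *	BITX(ecx, 23, 16) = 0x02 = 2	(associativity)
 *	BITX(ecx, 15,  8) = 0x01 = 1	(lines per tag)
 *	BITX(ecx,  7,  0) = 0x40 = 64	(line size in bytes)
 *
 * which add_amd_cache() records as a 64KB, 2-way L1 dcache with 64-byte
 * lines.
 */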

static void
add_amd_l2_assoc(dev_info_t *devi, const char *label, uint_t assoc)
{
	switch (assoc) {
	case 0:	/* off */
		break;
	case 1:
	case 2:
	case 4:
		add_cache_prop(devi, label, assoc_str, assoc);
		break;
	case 6:
		add_cache_prop(devi, label, assoc_str, 8);
		break;
	case 8:
		add_cache_prop(devi, label, assoc_str, 16);
		break;
	case 0xf:
		add_cache_prop(devi, label, fully_assoc, 1);
		break;
	default: /* reserved; ignore */
		break;
	}
}
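
/*
 * Note that this switch open-codes part of the 4-bit AMD L2/L3
 * associativity encoding that the amd_afd[] table below captures in
 * full: an encoded 6 means 8-way, an encoded 8 means 16-way, and 0xf
 * means fully associative.
 */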

static void
add_amd_l2_tlb(dev_info_t *devi, const char *label, uint_t assoc, uint_t size)
{
	if (size == 0 || assoc == 0)
		return;
	add_amd_l2_assoc(devi, label, assoc);
	add_cache_prop(devi, label, size_str, size);
}

static void
add_amd_l2_cache(dev_info_t *devi, const char *label,
    uint_t size, uint_t assoc, uint_t lines_per_tag, uint_t line_size)
{
	if (size == 0 || assoc == 0 || line_size == 0)
		return;
	add_amd_l2_assoc(devi, label, assoc);
	if (lines_per_tag != 0)
		add_cache_prop(devi, label, "lines-per-tag", lines_per_tag);
	add_cache_prop(devi, label, line_str, line_size);
	add_cache_prop(devi, label, size_str, size * 1024);
}

static void
amd_cache_info(struct cpuid_info *cpi, dev_info_t *devi)
{
	struct cpuid_regs *cp;

	if (cpi->cpi_xmaxeax < 0x80000005)
		return;
	cp = &cpi->cpi_extd[5];

	/*
	 * 4M/2M L1 TLB configuration
	 *
	 * We report the size for 2M pages because AMD uses two
	 * TLB entries for one 4M page.
	 */
	add_amd_tlb(devi, "dtlb-2M",
	    BITX(cp->cp_eax, 31, 24), BITX(cp->cp_eax, 23, 16));
	add_amd_tlb(devi, "itlb-2M",
	    BITX(cp->cp_eax, 15, 8), BITX(cp->cp_eax, 7, 0));

	/*
	 * 4K L1 TLB configuration
	 */

	switch (cpi->cpi_vendor) {
		uint_t nentries;
	case X86_VENDOR_TM:
		if (cpi->cpi_family >= 5) {
			/*
			 * Crusoe processors have 256 TLB entries, but
			 * cpuid data format constrains them to only
			 * reporting 255 of them.
			 */
			if ((nentries = BITX(cp->cp_ebx, 23, 16)) == 255)
				nentries = 256;
			/*
			 * Crusoe processors also have a unified TLB
			 */
			add_amd_tlb(devi, "tlb-4K", BITX(cp->cp_ebx, 31, 24),
			    nentries);
			break;
		}
		/*FALLTHROUGH*/
	default:
		add_amd_tlb(devi, itlb4k_str,
		    BITX(cp->cp_ebx, 31, 24), BITX(cp->cp_ebx, 23, 16));
		add_amd_tlb(devi, dtlb4k_str,
		    BITX(cp->cp_ebx, 15, 8), BITX(cp->cp_ebx, 7, 0));
		break;
	}

	/*
	 * data L1 cache configuration
	 */

	add_amd_cache(devi, l1_dcache_str,
	    BITX(cp->cp_ecx, 31, 24), BITX(cp->cp_ecx, 23, 16),
	    BITX(cp->cp_ecx, 15, 8), BITX(cp->cp_ecx, 7, 0));

	/*
	 * code L1 cache configuration
	 */

	add_amd_cache(devi, l1_icache_str,
	    BITX(cp->cp_edx, 31, 24), BITX(cp->cp_edx, 23, 16),
	    BITX(cp->cp_edx, 15, 8), BITX(cp->cp_edx, 7, 0));

	if (cpi->cpi_xmaxeax < 0x80000006)
		return;
	cp = &cpi->cpi_extd[6];

	/*
	 * Check for a unified L2 TLB for large pages; a zero in bits 31:16
	 * of %eax indicates a unified TLB, described entirely by bits 15:0.
	 */

	if (BITX(cp->cp_eax, 31, 16) == 0)
		add_amd_l2_tlb(devi, "l2-tlb-2M",
		    BITX(cp->cp_eax, 15, 12), BITX(cp->cp_eax, 11, 0));
	else {
		add_amd_l2_tlb(devi, "l2-dtlb-2M",
		    BITX(cp->cp_eax, 31, 28), BITX(cp->cp_eax, 27, 16));
		add_amd_l2_tlb(devi, "l2-itlb-2M",
		    BITX(cp->cp_eax, 15, 12), BITX(cp->cp_eax, 11, 0));
	}

	/*
	 * Check for a unified L2 TLB for 4K pages; %ebx is laid out the
	 * same way as %eax above.
	 */

	if (BITX(cp->cp_ebx, 31, 16) == 0) {
		add_amd_l2_tlb(devi, "l2-tlb-4K",
		    BITX(cp->cp_ebx, 15, 12), BITX(cp->cp_ebx, 11, 0));
	} else {
		add_amd_l2_tlb(devi, "l2-dtlb-4K",
		    BITX(cp->cp_ebx, 31, 28), BITX(cp->cp_ebx, 27, 16));
		add_amd_l2_tlb(devi, "l2-itlb-4K",
		    BITX(cp->cp_ebx, 15, 12), BITX(cp->cp_ebx, 11, 0));
	}

	add_amd_l2_cache(devi, l2_cache_str,
	    BITX(cp->cp_ecx, 31, 16), BITX(cp->cp_ecx, 15, 12),
	    BITX(cp->cp_ecx, 11, 8), BITX(cp->cp_ecx, 7, 0));
}

/*
 * There are two basic ways that the x86 world describes its cache
 * and tlb architecture - Intel's way and AMD's way.
 *
 * Return which flavor of cache architecture we should use.
 */
static int
x86_which_cacheinfo(struct cpuid_info *cpi)
{
	switch (cpi->cpi_vendor) {
	case X86_VENDOR_Intel:
		if (cpi->cpi_maxeax >= 2)
			return (X86_VENDOR_Intel);
		break;
	case X86_VENDOR_AMD:
		/*
		 * The K5 model 1 was the first part from AMD that reported
		 * cache sizes via extended cpuid functions.
		 */
		if (cpi->cpi_family > 5 ||
		    (cpi->cpi_family == 5 && cpi->cpi_model >= 1))
			return (X86_VENDOR_AMD);
		break;
	case X86_VENDOR_HYGON:
		return (X86_VENDOR_AMD);
	case X86_VENDOR_TM:
		if (cpi->cpi_family >= 5)
			return (X86_VENDOR_AMD);
		/*FALLTHROUGH*/
	default:
		/*
		 * If they have extended CPU data for 0x80000005
		 * then we assume they have AMD-format cache
		 * information.
		 *
		 * If not, and the vendor happens to be Cyrix,
		 * then try our Cyrix-specific handler.
		 *
		 * If we're not Cyrix, then assume we're using Intel's
		 * table-driven format instead.
		 */
		if (cpi->cpi_xmaxeax >= 0x80000005)
			return (X86_VENDOR_AMD);
		else if (cpi->cpi_vendor == X86_VENDOR_Cyrix)
			return (X86_VENDOR_Cyrix);
		else if (cpi->cpi_maxeax >= 2)
			return (X86_VENDOR_Intel);
		break;
	}
	return (-1);
}

void
cpuid_set_cpu_properties(void *dip, processorid_t cpu_id,
    struct cpuid_info *cpi)
{
	dev_info_t *cpu_devi;
	int create;

	cpu_devi = (dev_info_t *)dip;

	/* device_type */
	(void) ndi_prop_update_string(DDI_DEV_T_NONE, cpu_devi,
	    "device_type", "cpu");

	/* reg */
	(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
	    "reg", cpu_id);

	/* cpu-mhz, and clock-frequency */
	if (cpu_freq > 0) {
		long long mul;

		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "cpu-mhz", cpu_freq);
		if ((mul = cpu_freq * 1000000LL) <= INT_MAX)
			(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
			    "clock-frequency", (int)mul);
	}

	ASSERT(is_x86_feature(x86_featureset, X86FSET_CPUID));

	/* vendor-id */
	(void) ndi_prop_update_string(DDI_DEV_T_NONE, cpu_devi,
	    "vendor-id", cpi->cpi_vendorstr);

	if (cpi->cpi_maxeax == 0) {
		return;
	}

	/*
	 * family, model, and step
	 */
	(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
	    "family", CPI_FAMILY(cpi));
	(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
	    "cpu-model", CPI_MODEL(cpi));
	(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
	    "stepping-id", CPI_STEP(cpi));

	/* type */
	switch (cpi->cpi_vendor) {
	case X86_VENDOR_Intel:
		create = 1;
		break;
	default:
		create = 0;
		break;
	}
	if (create)
		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "type", CPI_TYPE(cpi));

	/* ext-family */
	switch (cpi->cpi_vendor) {
	case X86_VENDOR_Intel:
	case X86_VENDOR_AMD:
		create = cpi->cpi_family >= 0xf;
		break;
	case X86_VENDOR_HYGON:
		create = 1;
		break;
	default:
		create = 0;
		break;
	}
	if (create)
		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "ext-family", CPI_FAMILY_XTD(cpi));

	/* ext-model */
	switch (cpi->cpi_vendor) {
	case X86_VENDOR_Intel:
		create = IS_EXTENDED_MODEL_INTEL(cpi);
		break;
	case X86_VENDOR_AMD:
		create = CPI_FAMILY(cpi) == 0xf;
		break;
	case X86_VENDOR_HYGON:
		create = 1;
		break;
	default:
		create = 0;
		break;
	}
	if (create)
		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "ext-model", CPI_MODEL_XTD(cpi));

	/* generation */
	switch (cpi->cpi_vendor) {
	case X86_VENDOR_AMD:
	case X86_VENDOR_HYGON:
		/*
		 * AMD K5 model 1 was the first part to support this
		 */
		create = cpi->cpi_xmaxeax >= 0x80000001;
		break;
	default:
		create = 0;
		break;
	}
	if (create)
		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "generation", BITX((cpi)->cpi_extd[1].cp_eax, 11, 8));

	/* brand-id */
	switch (cpi->cpi_vendor) {
	case X86_VENDOR_Intel:
		/*
		 * brand id first appeared on Pentium III Xeon model 8,
		 * and Celeron model 8 processors and Opteron
		 */
		create = cpi->cpi_family > 6 ||
		    (cpi->cpi_family == 6 && cpi->cpi_model >= 8);
		break;
	case X86_VENDOR_AMD:
		create = cpi->cpi_family >= 0xf;
		break;
	case X86_VENDOR_HYGON:
		create = 1;
		break;
	default:
		create = 0;
		break;
	}
	if (create && cpi->cpi_brandid != 0) {
		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "brand-id", cpi->cpi_brandid);
	}

	/* chunks, and apic-id */
	switch (cpi->cpi_vendor) {
		/*
		 * first available on Pentium IV and Opteron (K8)
		 */
	case X86_VENDOR_Intel:
		create = IS_NEW_F6(cpi) || cpi->cpi_family >= 0xf;
		break;
	case X86_VENDOR_AMD:
		create = cpi->cpi_family >= 0xf;
		break;
	case X86_VENDOR_HYGON:
		create = 1;
		break;
	default:
		create = 0;
		break;
	}
	if (create) {
		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "chunks", CPI_CHUNKS(cpi));
		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "apic-id", cpi->cpi_apicid);
		if (cpi->cpi_chipid >= 0) {
			(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
			    "chip#", cpi->cpi_chipid);
			(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
			    "clog#", cpi->cpi_clogid);
		}
	}

	/* cpuid-features */
	(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
	    "cpuid-features", CPI_FEATURES_EDX(cpi));


	/* cpuid-features-ecx */
	switch (cpi->cpi_vendor) {
	case X86_VENDOR_Intel:
		create = IS_NEW_F6(cpi) || cpi->cpi_family >= 0xf;
		break;
	case X86_VENDOR_AMD:
		create = cpi->cpi_family >= 0xf;
		break;
	case X86_VENDOR_HYGON:
		create = 1;
		break;
	default:
		create = 0;
		break;
	}
	if (create)
		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "cpuid-features-ecx", CPI_FEATURES_ECX(cpi));

	/* ext-cpuid-features */
	switch (cpi->cpi_vendor) {
	case X86_VENDOR_Intel:
	case X86_VENDOR_AMD:
	case X86_VENDOR_HYGON:
	case X86_VENDOR_Cyrix:
	case X86_VENDOR_TM:
	case X86_VENDOR_Centaur:
		create = cpi->cpi_xmaxeax >= 0x80000001;
		break;
	default:
		create = 0;
		break;
	}
	if (create) {
		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "ext-cpuid-features", CPI_FEATURES_XTD_EDX(cpi));
		(void) ndi_prop_update_int(DDI_DEV_T_NONE, cpu_devi,
		    "ext-cpuid-features-ecx", CPI_FEATURES_XTD_ECX(cpi));
	}

	/*
	 * Brand String first appeared in Intel Pentium IV, AMD K5
	 * model 1, and Cyrix GXm.  On earlier models we try to
	 * simulate something similar .. so this string should always
	 * say -something- about the processor, however lame.
	 */
	(void) ndi_prop_update_string(DDI_DEV_T_NONE, cpu_devi,
	    "brand-string", cpi->cpi_brandstr);

	/*
	 * Finally, cache and tlb information
	 */
	switch (x86_which_cacheinfo(cpi)) {
	case X86_VENDOR_Intel:
		intel_walk_cacheinfo(cpi, cpu_devi, add_cacheent_props);
		break;
	case X86_VENDOR_Cyrix:
		cyrix_walk_cacheinfo(cpi, cpu_devi, add_cacheent_props);
		break;
	case X86_VENDOR_AMD:
		amd_cache_info(cpi, cpu_devi);
		break;
	default:
		break;
	}
}

struct l2info {
	int *l2i_csz;
	int *l2i_lsz;
	int *l2i_assoc;
	int l2i_ret;
};

/*
 * A cacheinfo walker that fetches the size, line-size and associativity
 * of the L2 cache
 */
static int
intel_l2cinfo(void *arg, const struct cachetab *ct)
{
	struct l2info *l2i = arg;
	int *ip;

	if (ct->ct_label != l2_cache_str &&
	    ct->ct_label != sl2_cache_str)
		return (0);	/* not an L2 -- keep walking */

	if ((ip = l2i->l2i_csz) != NULL)
		*ip = ct->ct_size;
	if ((ip = l2i->l2i_lsz) != NULL)
		*ip = ct->ct_line_size;
	if ((ip = l2i->l2i_assoc) != NULL)
		*ip = ct->ct_assoc;
	l2i->l2i_ret = ct->ct_size;
	return (1);		/* was an L2 -- terminate walk */
}

/*
 * AMD L2/L3 Cache and TLB Associativity Field Definition:
 *
 *	Unlike the associativity for the L1 cache and tlb where the 8 bit
 *	value is the associativity, the associativity for the L2 cache and
 *	tlb is encoded in the following table. The 4 bit L2 value serves as
 *	an index into the amd_afd[] array to determine the associativity.
 *	-1 is undefined. 0 is fully associative.
 */

static int amd_afd[] =
	{-1, 1, 2, -1, 4, -1, 8, -1, 16, -1, 32, 48, 64, 96, 128, 0};
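
/*
 * For example, if cpuid function 0x80000006 reported 0x6 in the L2
 * associativity field (bits 15:12 of %ecx), amd_afd[6] gives an 8-way
 * set-associative cache, while 0xf (amd_afd[15] == 0) denotes a fully
 * associative one.
 */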

static void
amd_l2cacheinfo(struct cpuid_info *cpi, struct l2info *l2i)
{
	struct cpuid_regs *cp;
	uint_t size, assoc;
	int i;
	int *ip;

	if (cpi->cpi_xmaxeax < 0x80000006)
		return;
	cp = &cpi->cpi_extd[6];

	if ((i = BITX(cp->cp_ecx, 15, 12)) != 0 &&
	    (size = BITX(cp->cp_ecx, 31, 16)) != 0) {
		uint_t cachesz = size * 1024;
		assoc = amd_afd[i];

		ASSERT(assoc != -1);

		if ((ip = l2i->l2i_csz) != NULL)
			*ip = cachesz;
		if ((ip = l2i->l2i_lsz) != NULL)
			*ip = BITX(cp->cp_ecx, 7, 0);
		if ((ip = l2i->l2i_assoc) != NULL)
			*ip = assoc;
		l2i->l2i_ret = cachesz;
	}
}

int
getl2cacheinfo(cpu_t *cpu, int *csz, int *lsz, int *assoc)
{
	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
	struct l2info __l2info, *l2i = &__l2info;

	l2i->l2i_csz = csz;
	l2i->l2i_lsz = lsz;
	l2i->l2i_assoc = assoc;
	l2i->l2i_ret = -1;

	switch (x86_which_cacheinfo(cpi)) {
	case X86_VENDOR_Intel:
		intel_walk_cacheinfo(cpi, l2i, intel_l2cinfo);
		break;
	case X86_VENDOR_Cyrix:
		cyrix_walk_cacheinfo(cpi, l2i, intel_l2cinfo);
		break;
	case X86_VENDOR_AMD:
		amd_l2cacheinfo(cpi, l2i);
		break;
	default:
		break;
	}
	return (l2i->l2i_ret);
}
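
/*
 * A minimal usage sketch (illustrative only): a caller interested solely
 * in the L2 line size may pass NULL for the outputs it does not need,
 * e.g.
 *
 *	int lsz;
 *	if (getl2cacheinfo(CPU, NULL, &lsz, NULL) > 0)
 *		... use lsz ...
 *
 * The return value is the L2 size in bytes, or -1 if no L2 cache was
 * identified.
 */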

#if !defined(__xpv)

uint32_t *
cpuid_mwait_alloc(cpu_t *cpu)
{
	uint32_t	*ret;
	size_t		mwait_size;

	ASSERT(cpuid_checkpass(CPU, CPUID_PASS_EXTENDED));

	mwait_size = CPU->cpu_m.mcpu_cpi->cpi_mwait.mon_max;
	if (mwait_size == 0)
		return (NULL);

	/*
	 * kmem_alloc() returns cache line size aligned data for mwait_size
	 * allocations.  mwait_size is currently cache line sized.  Neither
	 * of these implementation details is guaranteed to be true in the
	 * future.
	 *
	 * First try allocating mwait_size as kmem_alloc() currently returns
	 * correctly aligned memory.  If kmem_alloc() does not return
	 * mwait_size aligned memory, then use mwait_size ROUNDUP.
	 *
	 * Set cpi_mwait.buf_actual and cpi_mwait.size_actual in case we
	 * decide to free this memory.
	 */
	ret = kmem_zalloc(mwait_size, KM_SLEEP);
	if (ret == (uint32_t *)P2ROUNDUP((uintptr_t)ret, mwait_size)) {
		cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual = ret;
		cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual = mwait_size;
		*ret = MWAIT_RUNNING;
		return (ret);
	} else {
		kmem_free(ret, mwait_size);
		ret = kmem_zalloc(mwait_size * 2, KM_SLEEP);
		cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual = ret;
		cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual = mwait_size * 2;
		ret = (uint32_t *)P2ROUNDUP((uintptr_t)ret, mwait_size);
		*ret = MWAIT_RUNNING;
		return (ret);
	}
}
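
/*
 * Worked example (hypothetical addresses): with mwait_size == 0x40, a
 * kmem_zalloc() result of 0xff001040 is already 0x40-aligned, since
 * P2ROUNDUP(0xff001040, 0x40) == 0xff001040, and is used directly; a
 * result of 0xff001050 would round up to 0xff001080, so the double-size
 * fallback allocation would be taken and the aligned pointer carved out
 * of it instead.
 */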

void
cpuid_mwait_free(cpu_t *cpu)
{
	if (cpu->cpu_m.mcpu_cpi == NULL) {
		return;
	}

	if (cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual != NULL &&
	    cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual > 0) {
		kmem_free(cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual,
		    cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual);
	}

	cpu->cpu_m.mcpu_cpi->cpi_mwait.buf_actual = NULL;
	cpu->cpu_m.mcpu_cpi->cpi_mwait.size_actual = 0;
}

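/*
 * Select the tsc_read() implementation at boot by copying the matching
 * template routine (e.g. the code between _tsc_lfence_start and
 * _tsc_lfence_end) over the body of tsc_read() itself, according to
 * which TSC read sequence the processor requires.
 */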
void
patch_tsc_read(int flag)
{
	size_t cnt;

	switch (flag) {
	case TSC_NONE:
		cnt = &_no_rdtsc_end - &_no_rdtsc_start;
		(void) memcpy((void *)tsc_read, (void *)&_no_rdtsc_start, cnt);
		break;
	case TSC_RDTSC_LFENCE:
		cnt = &_tsc_lfence_end - &_tsc_lfence_start;
		(void) memcpy((void *)tsc_read,
		    (void *)&_tsc_lfence_start, cnt);
		break;
	case TSC_TSCP:
		cnt = &_tscp_end - &_tscp_start;
		(void) memcpy((void *)tsc_read, (void *)&_tscp_start, cnt);
		break;
	default:
		/* Bail for unexpected TSC types. (TSC_NONE covers 0) */
		cmn_err(CE_PANIC, "Unrecognized TSC type: %d", flag);
		break;
	}
	tsc_type = flag;
}

int
cpuid_deep_cstates_supported(void)
{
	struct cpuid_info *cpi;
	struct cpuid_regs regs;

	ASSERT(cpuid_checkpass(CPU, CPUID_PASS_BASIC));
	ASSERT(is_x86_feature(x86_featureset, X86FSET_CPUID));

	cpi = CPU->cpu_m.mcpu_cpi;

	switch (cpi->cpi_vendor) {
	case X86_VENDOR_Intel:
	case X86_VENDOR_AMD:
	case X86_VENDOR_HYGON:
		if (cpi->cpi_xmaxeax < 0x80000007)
			return (0);

		/*
		 * Does TSC run at a constant rate in all C-states?
		 */
		regs.cp_eax = 0x80000007;
		(void) __cpuid_insn(&regs);
		return (regs.cp_edx & CPUID_TSC_CSTATE_INVARIANCE);

	default:
		return (0);
	}
}

#endif	/* !__xpv */

void
post_startup_cpu_fixups(void)
{
#ifndef __xpv
	/*
	 * Some AMD processors support C1E state. Entering this state will
	 * cause the local APIC timer to stop, which we can't deal with at
	 * this time.
	 */
	if (cpuid_getvendor(CPU) == X86_VENDOR_AMD) {
		on_trap_data_t otd;
		uint64_t reg;

		if (!on_trap(&otd, OT_DATA_ACCESS)) {
			reg = rdmsr(MSR_AMD_INT_PENDING_CMP_HALT);
			/* Disable C1E state if it is enabled by BIOS */
			if ((reg >> AMD_ACTONCMPHALT_SHIFT) &
			    AMD_ACTONCMPHALT_MASK) {
				reg &= ~(AMD_ACTONCMPHALT_MASK <<
				    AMD_ACTONCMPHALT_SHIFT);
				wrmsr(MSR_AMD_INT_PENDING_CMP_HALT, reg);
			}
		}
		no_trap();
	}
#endif	/* !__xpv */
}

void
enable_pcid(void)
{
	if (x86_use_pcid == -1)
		x86_use_pcid = is_x86_feature(x86_featureset, X86FSET_PCID);

	if (x86_use_invpcid == -1) {
		x86_use_invpcid = is_x86_feature(x86_featureset,
		    X86FSET_INVPCID);
	}

	if (!x86_use_pcid)
		return;

	/*
	 * Intel says that on setting PCIDE, the processor immediately starts
	 * using the PCID bits; better make sure there's nothing there.
	 */
	ASSERT((getcr3() & MMU_PAGEOFFSET) == PCID_NONE);

	setcr4(getcr4() | CR4_PCIDE);
}

/*
 * Set up the registers necessary to enable the XSAVE feature on this
 * processor. This function needs to be called early enough that no
 * xsave/xrstor ops will execute on the processor before the MSRs are
 * properly set up.
 *
 * The current implementation makes the following assumptions:
 * - cpuid_pass_basic() is done, so that X86 features are known.
 * - fpu_probe() is done, so that fp_save_mech is chosen.
 */
void
xsave_setup_msr(cpu_t *cpu)
{
	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_BASIC));
	ASSERT(fp_save_mech == FP_XSAVE);
	ASSERT(is_x86_feature(x86_featureset, X86FSET_XSAVE));

	/* Enable OSXSAVE in CR4. */
	setcr4(getcr4() | CR4_OSXSAVE);
	/*
	 * Update the SW copy of ECX, so that /dev/cpu/self/cpuid will report
	 * the correct value.
	 */
	cpu->cpu_m.mcpu_cpi->cpi_std[1].cp_ecx |= CPUID_INTC_ECX_OSXSAVE;
	setup_xfem();
}

/*
 * Starting with the Westmere processor, the local
 * APIC timer will continue running in all C-states,
 * including the deepest C-states.
 */
int
cpuid_arat_supported(void)
{
	struct cpuid_info *cpi;
	struct cpuid_regs regs;

	ASSERT(cpuid_checkpass(CPU, CPUID_PASS_BASIC));
	ASSERT(is_x86_feature(x86_featureset, X86FSET_CPUID));

	cpi = CPU->cpu_m.mcpu_cpi;

	switch (cpi->cpi_vendor) {
	case X86_VENDOR_Intel:
	case X86_VENDOR_AMD:
	case X86_VENDOR_HYGON:
		/*
		 * Always-running Local APIC Timer is
		 * indicated by CPUID.6.EAX[2].
		 */
		if (cpi->cpi_maxeax >= 6) {
			regs.cp_eax = 6;
			(void) cpuid_insn(NULL, &regs);
			return (regs.cp_eax & CPUID_INTC_EAX_ARAT);
		} else {
			return (0);
		}
	default:
		return (0);
	}
}

/*
 * Check support for Intel ENERGY_PERF_BIAS feature
 */
int
cpuid_iepb_supported(struct cpu *cp)
{
	struct cpuid_info *cpi = cp->cpu_m.mcpu_cpi;
	struct cpuid_regs regs;

	ASSERT(cpuid_checkpass(cp, CPUID_PASS_BASIC));
	ASSERT(is_x86_feature(x86_featureset, X86FSET_CPUID));

	if (!(is_x86_feature(x86_featureset, X86FSET_MSR))) {
		return (0);
	}

	/*
	 * Intel ENERGY_PERF_BIAS MSR is indicated by
	 * capability bit CPUID.6.ECX.3
	 */
	if ((cpi->cpi_vendor != X86_VENDOR_Intel) || (cpi->cpi_maxeax < 6))
		return (0);

	regs.cp_eax = 0x6;
	(void) cpuid_insn(NULL, &regs);
	return (regs.cp_ecx & CPUID_INTC_ECX_PERFBIAS);
}
7997ab5bb018SKeith M Wesolowski 
7998ab5bb018SKeith M Wesolowski /*
7999ab5bb018SKeith M Wesolowski  * Check support for TSC deadline timer
8000ab5bb018SKeith M Wesolowski  *
8001ab5bb018SKeith M Wesolowski  * The TSC deadline timer provides a superior software programming
8002ab5bb018SKeith M Wesolowski  * model over the local APIC timer, one that eliminates "time drift".
8003ab5bb018SKeith M Wesolowski  * Instead of specifying a relative time, software specifies an
8004ab5bb018SKeith M Wesolowski  * absolute time as the target at which the processor should
8005ab5bb018SKeith M Wesolowski  * generate a timer event.
8006ab5bb018SKeith M Wesolowski  */
8007ab5bb018SKeith M Wesolowski int
8008ab5bb018SKeith M Wesolowski cpuid_deadline_tsc_supported(void)
8009ab5bb018SKeith M Wesolowski {
8010ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = CPU->cpu_m.mcpu_cpi;
8011ab5bb018SKeith M Wesolowski 	struct cpuid_regs regs;
8012ab5bb018SKeith M Wesolowski 
8013ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(CPU, CPUID_PASS_BASIC));
8014ab5bb018SKeith M Wesolowski 	ASSERT(is_x86_feature(x86_featureset, X86FSET_CPUID));
8015ab5bb018SKeith M Wesolowski 
8016ab5bb018SKeith M Wesolowski 	switch (cpi->cpi_vendor) {
8017ab5bb018SKeith M Wesolowski 	case X86_VENDOR_Intel:
8018ab5bb018SKeith M Wesolowski 		if (cpi->cpi_maxeax >= 1) {
8019ab5bb018SKeith M Wesolowski 			regs.cp_eax = 1;
8020ab5bb018SKeith M Wesolowski 			(void) cpuid_insn(NULL, &regs);
8021ab5bb018SKeith M Wesolowski 			return (regs.cp_ecx & CPUID_DEADLINE_TSC);
8022ab5bb018SKeith M Wesolowski 		} else {
8023ab5bb018SKeith M Wesolowski 			return (0);
8024ab5bb018SKeith M Wesolowski 		}
8025ab5bb018SKeith M Wesolowski 	default:
8026ab5bb018SKeith M Wesolowski 		return (0);
8027ab5bb018SKeith M Wesolowski 	}
8028ab5bb018SKeith M Wesolowski }
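
/*
 * To make the model concrete, here is a minimal sketch (not how the apix
 * timer code is actually structured) of arming a deadline once the local
 * APIC LVT timer entry has been placed in TSC-deadline mode.  The target is
 * an absolute TSC value written to the IA32_TSC_DEADLINE MSR (0x6e0);
 * 'delta' is a hypothetical tick count chosen by the caller:
 *
 *	uint64_t now = tsc_read();
 *	wrmsr(0x6e0, now + delta);	- fires ~delta ticks from now
 *	wrmsr(0x6e0, 0);		- writing zero disarms the timer
 */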
8029ab5bb018SKeith M Wesolowski 
8030ab5bb018SKeith M Wesolowski #if !defined(__xpv)
8031ab5bb018SKeith M Wesolowski /*
8032ab5bb018SKeith M Wesolowski  * Patch in versions of bcopy for high-performance Intel Nehalem (Nhm)
8033ab5bb018SKeith M Wesolowski  * processors and later.
8034ab5bb018SKeith M Wesolowski  */
8035ab5bb018SKeith M Wesolowski void
8036ab5bb018SKeith M Wesolowski patch_memops(uint_t vendor)
8037ab5bb018SKeith M Wesolowski {
8038ab5bb018SKeith M Wesolowski 	size_t cnt, i;
8039ab5bb018SKeith M Wesolowski 	caddr_t to, from;
8040ab5bb018SKeith M Wesolowski 
8041ab5bb018SKeith M Wesolowski 	if ((vendor == X86_VENDOR_Intel) &&
8042ab5bb018SKeith M Wesolowski 	    is_x86_feature(x86_featureset, X86FSET_SSE4_2)) {
8043ab5bb018SKeith M Wesolowski 		cnt = &bcopy_patch_end - &bcopy_patch_start;
8044ab5bb018SKeith M Wesolowski 		to = &bcopy_ck_size;
8045ab5bb018SKeith M Wesolowski 		from = &bcopy_patch_start;
8046ab5bb018SKeith M Wesolowski 		for (i = 0; i < cnt; i++) {
8047ab5bb018SKeith M Wesolowski 			*to++ = *from++;
8048ab5bb018SKeith M Wesolowski 		}
8049ab5bb018SKeith M Wesolowski 	}
8050ab5bb018SKeith M Wesolowski }
8051ab5bb018SKeith M Wesolowski #endif  /*  !__xpv */
8052ab5bb018SKeith M Wesolowski 
8053ab5bb018SKeith M Wesolowski /*
8054ab5bb018SKeith M Wesolowski  * We're being asked to tell the system how many bits are required to represent
8055ab5bb018SKeith M Wesolowski  * the various core and strand IDs. While it's tempting to derive this based
8056ab5bb018SKeith M Wesolowski  * on the values in cpi_ncore_per_chip and cpi_ncpu_per_chip, that isn't quite
8057ab5bb018SKeith M Wesolowski  * correct. Instead, this needs to be based on the number of bits that the APIC
8058ab5bb018SKeith M Wesolowski  * allows for these different configurations. We only update these to a larger
8059ab5bb018SKeith M Wesolowski  * value if we find one.
8060ab5bb018SKeith M Wesolowski  */
8061ab5bb018SKeith M Wesolowski void
8062ab5bb018SKeith M Wesolowski cpuid_get_ext_topo(cpu_t *cpu, uint_t *core_nbits, uint_t *strand_nbits)
8063ab5bb018SKeith M Wesolowski {
8064ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi;
8065ab5bb018SKeith M Wesolowski 
8066ab5bb018SKeith M Wesolowski 	VERIFY(cpuid_checkpass(CPU, CPUID_PASS_BASIC));
8067ab5bb018SKeith M Wesolowski 	cpi = cpu->cpu_m.mcpu_cpi;
8068ab5bb018SKeith M Wesolowski 
8069ab5bb018SKeith M Wesolowski 	if (cpi->cpi_ncore_bits > *core_nbits) {
8070ab5bb018SKeith M Wesolowski 		*core_nbits = cpi->cpi_ncore_bits;
8071ab5bb018SKeith M Wesolowski 	}
8072ab5bb018SKeith M Wesolowski 
8073ab5bb018SKeith M Wesolowski 	if (cpi->cpi_nthread_bits > *strand_nbits) {
8074ab5bb018SKeith M Wesolowski 		*strand_nbits = cpi->cpi_nthread_bits;
8075ab5bb018SKeith M Wesolowski 	}
8076ab5bb018SKeith M Wesolowski }
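
/*
 * For example (values hypothetical): with core_nbits of 3 and strand_nbits
 * of 1, a strand's APIC ID decomposes as
 *
 *	[ ... package bits ... | core (3 bits) | strand (1 bit) ]
 *
 * Since callers accumulate across all CPUs, the bit widths they end up with
 * are the widest seen anywhere in the system.
 */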
8077ab5bb018SKeith M Wesolowski 
8078ab5bb018SKeith M Wesolowski void
8079ab5bb018SKeith M Wesolowski cpuid_pass_ucode(cpu_t *cpu, uchar_t *fset)
8080ab5bb018SKeith M Wesolowski {
8081ab5bb018SKeith M Wesolowski 	struct cpuid_info *cpi = cpu->cpu_m.mcpu_cpi;
8082ab5bb018SKeith M Wesolowski 	struct cpuid_regs cp;
8083ab5bb018SKeith M Wesolowski 
8084ab5bb018SKeith M Wesolowski 	/*
8085ab5bb018SKeith M Wesolowski 	 * Reread the CPUID portions that we need for various security
8086ab5bb018SKeith M Wesolowski 	 * information.
8087ab5bb018SKeith M Wesolowski 	 */
8088c7c59243SHans Rosenfeld 	switch (cpi->cpi_vendor) {
8089c7c59243SHans Rosenfeld 	case X86_VENDOR_Intel:
8090ab5bb018SKeith M Wesolowski 		/*
8091ab5bb018SKeith M Wesolowski 		 * Check if we now have leaf 7 available to us.
8092ab5bb018SKeith M Wesolowski 		 */
8093ab5bb018SKeith M Wesolowski 		if (cpi->cpi_maxeax < 7) {
8094ab5bb018SKeith M Wesolowski 			bzero(&cp, sizeof (cp));
8095ab5bb018SKeith M Wesolowski 			cp.cp_eax = 0;
8096ab5bb018SKeith M Wesolowski 			cpi->cpi_maxeax = __cpuid_insn(&cp);
8097ab5bb018SKeith M Wesolowski 			if (cpi->cpi_maxeax < 7)
8098c7c59243SHans Rosenfeld 				break;
8099ab5bb018SKeith M Wesolowski 		}
8100ab5bb018SKeith M Wesolowski 
8101ab5bb018SKeith M Wesolowski 		bzero(&cp, sizeof (cp));
8102ab5bb018SKeith M Wesolowski 		cp.cp_eax = 7;
8103ab5bb018SKeith M Wesolowski 		cp.cp_ecx = 0;
8104ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(&cp);
8105ab5bb018SKeith M Wesolowski 		cpi->cpi_std[7] = cp;
8106c7c59243SHans Rosenfeld 		break;
8107c7c59243SHans Rosenfeld 
8108c7c59243SHans Rosenfeld 	case X86_VENDOR_AMD:
8109c7c59243SHans Rosenfeld 	case X86_VENDOR_HYGON:
8110ab5bb018SKeith M Wesolowski 		/* No xcpuid support */
8111ab5bb018SKeith M Wesolowski 		if (cpi->cpi_family < 5 ||
8112ab5bb018SKeith M Wesolowski 		    (cpi->cpi_family == 5 && cpi->cpi_model < 1))
8113c7c59243SHans Rosenfeld 			break;
8114ab5bb018SKeith M Wesolowski 
8115ab5bb018SKeith M Wesolowski 		if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_8) {
8116ab5bb018SKeith M Wesolowski 			bzero(&cp, sizeof (cp));
8117ab5bb018SKeith M Wesolowski 			cp.cp_eax = CPUID_LEAF_EXT_0;
8118ab5bb018SKeith M Wesolowski 			cpi->cpi_xmaxeax = __cpuid_insn(&cp);
8119c7c59243SHans Rosenfeld 			if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_8)
8120c7c59243SHans Rosenfeld 				break;
8121ab5bb018SKeith M Wesolowski 		}
8122ab5bb018SKeith M Wesolowski 
8123651a12cbSRobert Mustacchi 		/*
8124651a12cbSRobert Mustacchi 		 * Most AMD features are in extended leaf 8; automatic IBRS was
8125651a12cbSRobert Mustacchi 		 * added in extended leaf 0x21, so we check that as well.
8126651a12cbSRobert Mustacchi 		 */
8127ab5bb018SKeith M Wesolowski 		bzero(&cp, sizeof (cp));
8128ab5bb018SKeith M Wesolowski 		cp.cp_eax = CPUID_LEAF_EXT_8;
8129ab5bb018SKeith M Wesolowski 		(void) __cpuid_insn(&cp);
8130ab5bb018SKeith M Wesolowski 		platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_8, &cp);
8131ab5bb018SKeith M Wesolowski 		cpi->cpi_extd[8] = cp;
8132651a12cbSRobert Mustacchi 
8133c7c59243SHans Rosenfeld 		if (cpi->cpi_xmaxeax < CPUID_LEAF_EXT_21)
8134c7c59243SHans Rosenfeld 			break;
8135651a12cbSRobert Mustacchi 
8136651a12cbSRobert Mustacchi 		bzero(&cp, sizeof (cp));
8137651a12cbSRobert Mustacchi 		cp.cp_eax = CPUID_LEAF_EXT_21;
8138651a12cbSRobert Mustacchi 		(void) __cpuid_insn(&cp);
8139651a12cbSRobert Mustacchi 		platform_cpuid_mangle(cpi->cpi_vendor, CPUID_LEAF_EXT_21, &cp);
8140651a12cbSRobert Mustacchi 		cpi->cpi_extd[0x21] = cp;
8141c7c59243SHans Rosenfeld 		break;
8142c7c59243SHans Rosenfeld 
8143c7c59243SHans Rosenfeld 	default:
8144ab5bb018SKeith M Wesolowski 		/*
8145ab5bb018SKeith M Wesolowski 		 * Nothing to do here. Return an empty set which has already
8146ab5bb018SKeith M Wesolowski 		 * been zeroed for us.
8147ab5bb018SKeith M Wesolowski 		 */
8148ab5bb018SKeith M Wesolowski 		return;
8149ab5bb018SKeith M Wesolowski 	}
8150c7c59243SHans Rosenfeld 
8151ab5bb018SKeith M Wesolowski 	cpuid_scan_security(cpu, fset);
8152ab5bb018SKeith M Wesolowski }
8153ab5bb018SKeith M Wesolowski 
8154ab5bb018SKeith M Wesolowski /* ARGSUSED */
8155ab5bb018SKeith M Wesolowski static int
8156ab5bb018SKeith M Wesolowski cpuid_post_ucodeadm_xc(xc_arg_t arg0, xc_arg_t arg1, xc_arg_t arg2)
8157ab5bb018SKeith M Wesolowski {
8158ab5bb018SKeith M Wesolowski 	uchar_t *fset;
8159ab5bb018SKeith M Wesolowski 	boolean_t first_pass = (boolean_t)arg1;
8160ab5bb018SKeith M Wesolowski 
8161ab5bb018SKeith M Wesolowski 	fset = (uchar_t *)(arg0 + sizeof (x86_featureset) * CPU->cpu_id);
8162ab5bb018SKeith M Wesolowski 	if (first_pass && CPU->cpu_id != 0)
8163ab5bb018SKeith M Wesolowski 		return (0);
8164ab5bb018SKeith M Wesolowski 	if (!first_pass && CPU->cpu_id == 0)
8165ab5bb018SKeith M Wesolowski 		return (0);
8166ab5bb018SKeith M Wesolowski 	cpuid_pass_ucode(CPU, fset);
8167ab5bb018SKeith M Wesolowski 
8168ab5bb018SKeith M Wesolowski 	return (0);
8169ab5bb018SKeith M Wesolowski }
8170ab5bb018SKeith M Wesolowski 
8171ab5bb018SKeith M Wesolowski /*
8172ab5bb018SKeith M Wesolowski  * After a microcode update where the version has changed, we need to
8173ab5bb018SKeith M Wesolowski  * rescan CPUID. To do this we check every CPU to make sure that they have the
8174ab5bb018SKeith M Wesolowski  * same microcode. Then we perform a cross call to all such CPUs. It's the
8175ab5bb018SKeith M Wesolowski  * caller's job to make sure that no one else can end up doing an update while
8176ab5bb018SKeith M Wesolowski  * this is going on.
8177ab5bb018SKeith M Wesolowski  *
8178ab5bb018SKeith M Wesolowski  * We assume that the system is microcode capable if we're called.
8179ab5bb018SKeith M Wesolowski  */
8180ab5bb018SKeith M Wesolowski void
8181ab5bb018SKeith M Wesolowski cpuid_post_ucodeadm(void)
8182ab5bb018SKeith M Wesolowski {
8183ab5bb018SKeith M Wesolowski 	uint32_t rev;
8184ab5bb018SKeith M Wesolowski 	int i;
8185ab5bb018SKeith M Wesolowski 	struct cpu *cpu;
8186ab5bb018SKeith M Wesolowski 	cpuset_t cpuset;
8187ab5bb018SKeith M Wesolowski 	void *argdata;
8188ab5bb018SKeith M Wesolowski 	uchar_t *f0;
8189ab5bb018SKeith M Wesolowski 
8190ab5bb018SKeith M Wesolowski 	argdata = kmem_zalloc(sizeof (x86_featureset) * NCPU, KM_SLEEP);
8191ab5bb018SKeith M Wesolowski 
8192ab5bb018SKeith M Wesolowski 	mutex_enter(&cpu_lock);
8193ab5bb018SKeith M Wesolowski 	cpu = cpu_get(0);
8194ab5bb018SKeith M Wesolowski 	rev = cpu->cpu_m.mcpu_ucode_info->cui_rev;
8195ab5bb018SKeith M Wesolowski 	CPUSET_ONLY(cpuset, 0);
8196ab5bb018SKeith M Wesolowski 	for (i = 1; i < max_ncpus; i++) {
8197ab5bb018SKeith M Wesolowski 		if ((cpu = cpu_get(i)) == NULL)
8198ab5bb018SKeith M Wesolowski 			continue;
8199ab5bb018SKeith M Wesolowski 
8200ab5bb018SKeith M Wesolowski 		if (cpu->cpu_m.mcpu_ucode_info->cui_rev != rev) {
8201ab5bb018SKeith M Wesolowski 			panic("post microcode update CPU %d has differing "
8202ab5bb018SKeith M Wesolowski 			    "microcode revision (%u) from CPU 0 (%u)",
8203ab5bb018SKeith M Wesolowski 			    i, cpu->cpu_m.mcpu_ucode_info->cui_rev, rev);
8204ab5bb018SKeith M Wesolowski 		}
8205ab5bb018SKeith M Wesolowski 		CPUSET_ADD(cpuset, i);
8206ab5bb018SKeith M Wesolowski 	}
8207ab5bb018SKeith M Wesolowski 
8208ab5bb018SKeith M Wesolowski 	/*
8209ab5bb018SKeith M Wesolowski 	 * We do the cross calls in two passes. The first pass is only for the
8210ab5bb018SKeith M Wesolowski 	 * boot CPU. The second pass is for all of the other CPUs. This allows
8211ab5bb018SKeith M Wesolowski 	 * the boot CPU to go through and change behavior related to patching or
8212ab5bb018SKeith M Wesolowski 	 * whether or not Enhanced IBRS needs to be enabled and then allow all
8213ab5bb018SKeith M Wesolowski 	 * other CPUs to follow suit.
8214ab5bb018SKeith M Wesolowski 	 */
8215ab5bb018SKeith M Wesolowski 	kpreempt_disable();
8216ab5bb018SKeith M Wesolowski 	xc_sync((xc_arg_t)argdata, B_TRUE, 0, CPUSET2BV(cpuset),
8217ab5bb018SKeith M Wesolowski 	    cpuid_post_ucodeadm_xc);
8218ab5bb018SKeith M Wesolowski 	xc_sync((xc_arg_t)argdata, B_FALSE, 0, CPUSET2BV(cpuset),
8219ab5bb018SKeith M Wesolowski 	    cpuid_post_ucodeadm_xc);
8220ab5bb018SKeith M Wesolowski 	kpreempt_enable();
8221ab5bb018SKeith M Wesolowski 
8222ab5bb018SKeith M Wesolowski 	/*
8223ab5bb018SKeith M Wesolowski 	 * OK, now look at each CPU and see if their feature sets are equal.
8224ab5bb018SKeith M Wesolowski 	 */
8225ab5bb018SKeith M Wesolowski 	f0 = argdata;
8226ab5bb018SKeith M Wesolowski 	for (i = 1; i < max_ncpus; i++) {
8227ab5bb018SKeith M Wesolowski 		uchar_t *fset;
8228ab5bb018SKeith M Wesolowski 		if (!CPU_IN_SET(cpuset, i))
8229ab5bb018SKeith M Wesolowski 			continue;
8230ab5bb018SKeith M Wesolowski 
8231ab5bb018SKeith M Wesolowski 		fset = (uchar_t *)((uintptr_t)argdata +
8232ab5bb018SKeith M Wesolowski 		    sizeof (x86_featureset) * i);
8233ab5bb018SKeith M Wesolowski 
8234ab5bb018SKeith M Wesolowski 		if (!compare_x86_featureset(f0, fset)) {
8235ab5bb018SKeith M Wesolowski 			panic("Post microcode update CPU %d has "
8236ab5bb018SKeith M Wesolowski 			    "differing security feature set (%p) from CPU 0 "
8237ab5bb018SKeith M Wesolowski 			    "(%p), not appending to feature set", i,
8238ab5bb018SKeith M Wesolowski 			    (void *)fset, (void *)f0);
8239ab5bb018SKeith M Wesolowski 		}
8240ab5bb018SKeith M Wesolowski 	}
8241ab5bb018SKeith M Wesolowski 
8242ab5bb018SKeith M Wesolowski 	mutex_exit(&cpu_lock);
8243ab5bb018SKeith M Wesolowski 
8244ab5bb018SKeith M Wesolowski 	for (i = 0; i < NUM_X86_FEATURES; i++) {
8245ab5bb018SKeith M Wesolowski 		cmn_err(CE_CONT, "?post-ucode x86_feature: %s\n",
8246ab5bb018SKeith M Wesolowski 		    x86_feature_names[i]);
8247ab5bb018SKeith M Wesolowski 		if (is_x86_feature(f0, i)) {
8248ab5bb018SKeith M Wesolowski 			add_x86_feature(x86_featureset, i);
8249ab5bb018SKeith M Wesolowski 		}
8250ab5bb018SKeith M Wesolowski 	}
8251ab5bb018SKeith M Wesolowski 	kmem_free(argdata, sizeof (x86_featureset) * NCPU);
8252ab5bb018SKeith M Wesolowski }
8253ab5bb018SKeith M Wesolowski 
8254ab5bb018SKeith M Wesolowski typedef void (*cpuid_pass_f)(cpu_t *, void *);
8255ab5bb018SKeith M Wesolowski 
8256ab5bb018SKeith M Wesolowski typedef struct cpuid_pass_def {
8257ab5bb018SKeith M Wesolowski 	cpuid_pass_t cpd_pass;
8258ab5bb018SKeith M Wesolowski 	cpuid_pass_f cpd_func;
8259ab5bb018SKeith M Wesolowski } cpuid_pass_def_t;
8260ab5bb018SKeith M Wesolowski 
8261ab5bb018SKeith M Wesolowski /*
8262ab5bb018SKeith M Wesolowski  * See block comment at the top; note that cpuid_pass_ucode is not a pass in the
8263ab5bb018SKeith M Wesolowski  * normal sense and should not appear here.
8264ab5bb018SKeith M Wesolowski  */
8265ab5bb018SKeith M Wesolowski static const cpuid_pass_def_t cpuid_pass_defs[] = {
8266ab5bb018SKeith M Wesolowski 	{ CPUID_PASS_PRELUDE, cpuid_pass_prelude },
8267ab5bb018SKeith M Wesolowski 	{ CPUID_PASS_IDENT, cpuid_pass_ident },
8268ab5bb018SKeith M Wesolowski 	{ CPUID_PASS_BASIC, cpuid_pass_basic },
8269ab5bb018SKeith M Wesolowski 	{ CPUID_PASS_EXTENDED, cpuid_pass_extended },
8270ab5bb018SKeith M Wesolowski 	{ CPUID_PASS_DYNAMIC, cpuid_pass_dynamic },
8271ab5bb018SKeith M Wesolowski 	{ CPUID_PASS_RESOLVE, cpuid_pass_resolve },
8272ab5bb018SKeith M Wesolowski };
8273ab5bb018SKeith M Wesolowski 
8274ab5bb018SKeith M Wesolowski void
8275ab5bb018SKeith M Wesolowski cpuid_execpass(cpu_t *cp, cpuid_pass_t pass, void *arg)
8276ab5bb018SKeith M Wesolowski {
8277ab5bb018SKeith M Wesolowski 	VERIFY3S(pass, !=, CPUID_PASS_NONE);
8278ab5bb018SKeith M Wesolowski 
8279ab5bb018SKeith M Wesolowski 	if (cp == NULL)
8280ab5bb018SKeith M Wesolowski 		cp = CPU;
8281ab5bb018SKeith M Wesolowski 
8282ab5bb018SKeith M Wesolowski 	/*
8283ab5bb018SKeith M Wesolowski 	 * Space statically allocated for BSP, ensure pointer is set
8284ab5bb018SKeith M Wesolowski 	 */
8285ab5bb018SKeith M Wesolowski 	if (cp->cpu_id == 0 && cp->cpu_m.mcpu_cpi == NULL)
8286ab5bb018SKeith M Wesolowski 		cp->cpu_m.mcpu_cpi = &cpuid_info0;
8287ab5bb018SKeith M Wesolowski 
8288ab5bb018SKeith M Wesolowski 	ASSERT(cpuid_checkpass(cp, pass - 1));
8289ab5bb018SKeith M Wesolowski 
8290ab5bb018SKeith M Wesolowski 	for (uint_t i = 0; i < ARRAY_SIZE(cpuid_pass_defs); i++) {
8291ab5bb018SKeith M Wesolowski 		if (cpuid_pass_defs[i].cpd_pass == pass) {
8292ab5bb018SKeith M Wesolowski 			cpuid_pass_defs[i].cpd_func(cp, arg);
8293ab5bb018SKeith M Wesolowski 			cp->cpu_m.mcpu_cpi->cpi_pass = pass;
8294ab5bb018SKeith M Wesolowski 			return;
8295ab5bb018SKeith M Wesolowski 		}
8296ab5bb018SKeith M Wesolowski 	}
8297ab5bb018SKeith M Wesolowski 
8298ab5bb018SKeith M Wesolowski 	panic("unable to execute invalid cpuid pass %d on cpu%d\n",
8299ab5bb018SKeith M Wesolowski 	    pass, cp->cpu_id);
8300ab5bb018SKeith M Wesolowski }
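
/*
 * A sketch of the intended usage from startup code (the pass names are the
 * real cpuid_pass_t values; the calling context here is illustrative):
 *
 *	cpuid_execpass(cp, CPUID_PASS_IDENT, NULL);
 *	cpuid_execpass(cp, CPUID_PASS_BASIC, featureset);
 *
 * The ASSERT above enforces that pass N - 1 has completed on this CPU
 * before pass N may run.
 */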
830122e4c3acSKeith M Wesolowski 
830222e4c3acSKeith M Wesolowski /*
830322e4c3acSKeith M Wesolowski  * Extract the processor family from a chiprev.  Processor families are not the
830422e4c3acSKeith M Wesolowski  * same as cpuid families; see comments above and in x86_archext.h.
830522e4c3acSKeith M Wesolowski  */
830622e4c3acSKeith M Wesolowski x86_processor_family_t
830722e4c3acSKeith M Wesolowski chiprev_family(const x86_chiprev_t cr)
830822e4c3acSKeith M Wesolowski {
830922e4c3acSKeith M Wesolowski 	return ((x86_processor_family_t)_X86_CHIPREV_FAMILY(cr));
831022e4c3acSKeith M Wesolowski }
831122e4c3acSKeith M Wesolowski 
831222e4c3acSKeith M Wesolowski /*
831322e4c3acSKeith M Wesolowski  * A chiprev matches its template if the vendor and family are identical and the
831422e4c3acSKeith M Wesolowski  * revision of the chiprev matches one of the bits set in the template.  Callers
831522e4c3acSKeith M Wesolowski  * may bitwise-OR together chiprevs of the same vendor and family to form the
831622e4c3acSKeith M Wesolowski  * template, or use the _ANY variant.  It is not possible to match chiprevs of
831722e4c3acSKeith M Wesolowski  * multiple vendors or processor families with a single call.  Note that this
831822e4c3acSKeith M Wesolowski  * function operates on processor families, not cpuid families.
831922e4c3acSKeith M Wesolowski  */
832022e4c3acSKeith M Wesolowski boolean_t
832122e4c3acSKeith M Wesolowski chiprev_matches(const x86_chiprev_t cr, const x86_chiprev_t template)
832222e4c3acSKeith M Wesolowski {
832322e4c3acSKeith M Wesolowski 	return (_X86_CHIPREV_VENDOR(cr) == _X86_CHIPREV_VENDOR(template) &&
832422e4c3acSKeith M Wesolowski 	    _X86_CHIPREV_FAMILY(cr) == _X86_CHIPREV_FAMILY(template) &&
832522e4c3acSKeith M Wesolowski 	    (_X86_CHIPREV_REV(cr) & _X86_CHIPREV_REV(template)) != 0);
832622e4c3acSKeith M Wesolowski }
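
/*
 * For instance (a sketch; the constant names here are illustrative, see
 * x86_archext.h for the real set), two revisions of the same vendor and
 * family may be matched with a single OR-ed template:
 *
 *	if (chiprev_matches(cr,
 *	    X86_CHIPREV_AMD_ZEN3_REV_B0 | X86_CHIPREV_AMD_ZEN3_REV_B1))
 *		...
 *
 * This works because the templates share vendor and family bits and differ
 * only in their revision bits.
 */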
832722e4c3acSKeith M Wesolowski 
832822e4c3acSKeith M Wesolowski /*
832922e4c3acSKeith M Wesolowski  * A chiprev is at least min if the vendor and family are identical and the
833022e4c3acSKeith M Wesolowski  * revision of the chiprev is at least as recent as that of min.  Processor
833122e4c3acSKeith M Wesolowski  * families are considered unordered and cannot be compared using this function.
833222e4c3acSKeith M Wesolowski  * Note that this function operates on processor families, not cpuid families.
833322e4c3acSKeith M Wesolowski  * Use of the _ANY chiprev variant with this function is not useful; it will
833422e4c3acSKeith M Wesolowski  * always return B_FALSE if the _ANY variant is supplied as the minimum
833522e4c3acSKeith M Wesolowski  * revision.  To determine only whether a chiprev is of a given processor
833622e4c3acSKeith M Wesolowski  * family, test the return value of chiprev_family() instead.
833722e4c3acSKeith M Wesolowski  */
833822e4c3acSKeith M Wesolowski boolean_t
833922e4c3acSKeith M Wesolowski chiprev_at_least(const x86_chiprev_t cr, const x86_chiprev_t min)
834022e4c3acSKeith M Wesolowski {
834122e4c3acSKeith M Wesolowski 	return (_X86_CHIPREV_VENDOR(cr) == _X86_CHIPREV_VENDOR(min) &&
834222e4c3acSKeith M Wesolowski 	    _X86_CHIPREV_FAMILY(cr) == _X86_CHIPREV_FAMILY(min) &&
834322e4c3acSKeith M Wesolowski 	    _X86_CHIPREV_REV(cr) >= _X86_CHIPREV_REV(min));
834422e4c3acSKeith M Wesolowski }
834522e4c3acSKeith M Wesolowski 
834622e4c3acSKeith M Wesolowski /*
834722e4c3acSKeith M Wesolowski  * The uarch functions operate in a manner similar to the chiprev functions
834822e4c3acSKeith M Wesolowski  * above.  While it is tempting to allow these to operate on microarchitectures
834922e4c3acSKeith M Wesolowski  * produced by a specific vendor in an ordered fashion (e.g., ZEN3 is "newer"
835022e4c3acSKeith M Wesolowski  * than ZEN2), we elect not to do so because a manufacturer may supply
835122e4c3acSKeith M Wesolowski  * processors of multiple microarchitecture families, each of which may
835222e4c3acSKeith M Wesolowski  * be internally ordered but unordered with respect to those of other families.
835322e4c3acSKeith M Wesolowski  */
835422e4c3acSKeith M Wesolowski x86_uarch_t
835522e4c3acSKeith M Wesolowski uarchrev_uarch(const x86_uarchrev_t ur)
835622e4c3acSKeith M Wesolowski {
835722e4c3acSKeith M Wesolowski 	return ((x86_uarch_t)_X86_UARCHREV_UARCH(ur));
835822e4c3acSKeith M Wesolowski }
835922e4c3acSKeith M Wesolowski 
836022e4c3acSKeith M Wesolowski boolean_t
836122e4c3acSKeith M Wesolowski uarchrev_matches(const x86_uarchrev_t ur, const x86_uarchrev_t template)
836222e4c3acSKeith M Wesolowski {
836322e4c3acSKeith M Wesolowski 	return (_X86_UARCHREV_VENDOR(ur) == _X86_UARCHREV_VENDOR(template) &&
836422e4c3acSKeith M Wesolowski 	    _X86_UARCHREV_UARCH(ur) == _X86_UARCHREV_UARCH(template) &&
836522e4c3acSKeith M Wesolowski 	    (_X86_UARCHREV_REV(ur) & _X86_UARCHREV_REV(template)) != 0);
836622e4c3acSKeith M Wesolowski }
836722e4c3acSKeith M Wesolowski 
836822e4c3acSKeith M Wesolowski boolean_t
836922e4c3acSKeith M Wesolowski uarchrev_at_least(const x86_uarchrev_t ur, const x86_uarchrev_t min)
837022e4c3acSKeith M Wesolowski {
837122e4c3acSKeith M Wesolowski 	return (_X86_UARCHREV_VENDOR(ur) == _X86_UARCHREV_VENDOR(min) &&
837222e4c3acSKeith M Wesolowski 	    _X86_UARCHREV_UARCH(ur) == _X86_UARCHREV_UARCH(min) &&
837322e4c3acSKeith M Wesolowski 	    _X86_UARCHREV_REV(ur) >= _X86_UARCHREV_REV(min));
837422e4c3acSKeith M Wesolowski }
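
/*
 * A sketch of the intended usage (constant names illustrative): membership
 * in a microarchitecture family is tested via uarchrev_uarch(), while a
 * minimum revision within a single family uses uarchrev_at_least():
 *
 *	if (uarchrev_uarch(ur) == X86_UARCH_AMD_ZEN3)
 *		...
 */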
8375dd23d762SRobert Mustacchi 
8376dd23d762SRobert Mustacchi /*
8377dd23d762SRobert Mustacchi  * Topology cache related information. This is yet another cache interface that
8378dd23d762SRobert Mustacchi  * we're exposing, intended to be used when we have either Intel Leaf 4 or
8379dd23d762SRobert Mustacchi  * AMD extended Leaf 0x1D (0x8000001D, introduced with Zen 1).
8380dd23d762SRobert Mustacchi  */
8381dd23d762SRobert Mustacchi static boolean_t
8382dd23d762SRobert Mustacchi cpuid_cache_topo_sup(const struct cpuid_info *cpi)
8383dd23d762SRobert Mustacchi {
8384dd23d762SRobert Mustacchi 	switch (cpi->cpi_vendor) {
8385dd23d762SRobert Mustacchi 	case X86_VENDOR_Intel:
8386dd23d762SRobert Mustacchi 		if (cpi->cpi_maxeax >= 4) {
8387dd23d762SRobert Mustacchi 			return (B_TRUE);
8388dd23d762SRobert Mustacchi 		}
8389dd23d762SRobert Mustacchi 		break;
8390dd23d762SRobert Mustacchi 	case X86_VENDOR_AMD:
8391dd23d762SRobert Mustacchi 	case X86_VENDOR_HYGON:
8392dd23d762SRobert Mustacchi 		if (cpi->cpi_xmaxeax >= CPUID_LEAF_EXT_1d &&
8393dd23d762SRobert Mustacchi 		    is_x86_feature(x86_featureset, X86FSET_TOPOEXT)) {
8394dd23d762SRobert Mustacchi 			return (B_TRUE);
8395dd23d762SRobert Mustacchi 		}
8396dd23d762SRobert Mustacchi 		break;
8397dd23d762SRobert Mustacchi 	default:
8398dd23d762SRobert Mustacchi 		break;
8399dd23d762SRobert Mustacchi 	}
8400dd23d762SRobert Mustacchi 
8401dd23d762SRobert Mustacchi 	return (B_FALSE);
8402dd23d762SRobert Mustacchi }
8403dd23d762SRobert Mustacchi 
8404dd23d762SRobert Mustacchi int
8405dd23d762SRobert Mustacchi cpuid_getncaches(struct cpu *cpu, uint32_t *ncache)
8406dd23d762SRobert Mustacchi {
8407dd23d762SRobert Mustacchi 	const struct cpuid_info *cpi;
8408dd23d762SRobert Mustacchi 
8409dd23d762SRobert Mustacchi 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_DYNAMIC));
8410dd23d762SRobert Mustacchi 	cpi = cpu->cpu_m.mcpu_cpi;
8411dd23d762SRobert Mustacchi 
8412dd23d762SRobert Mustacchi 	if (!cpuid_cache_topo_sup(cpi)) {
8413dd23d762SRobert Mustacchi 		return (ENOTSUP);
8414dd23d762SRobert Mustacchi 	}
8415dd23d762SRobert Mustacchi 
8416dd23d762SRobert Mustacchi 	*ncache = cpi->cpi_cache_leaf_size;
8417dd23d762SRobert Mustacchi 	return (0);
8418dd23d762SRobert Mustacchi }
8419dd23d762SRobert Mustacchi 
8420dd23d762SRobert Mustacchi int
8421dd23d762SRobert Mustacchi cpuid_getcache(struct cpu *cpu, uint32_t cno, x86_cache_t *cache)
8422dd23d762SRobert Mustacchi {
8423dd23d762SRobert Mustacchi 	const struct cpuid_info *cpi;
8424dd23d762SRobert Mustacchi 	const struct cpuid_regs *cp;
8425dd23d762SRobert Mustacchi 
8426dd23d762SRobert Mustacchi 	ASSERT(cpuid_checkpass(cpu, CPUID_PASS_DYNAMIC));
8427dd23d762SRobert Mustacchi 	cpi = cpu->cpu_m.mcpu_cpi;
8428dd23d762SRobert Mustacchi 
8429dd23d762SRobert Mustacchi 	if (!cpuid_cache_topo_sup(cpi)) {
8430dd23d762SRobert Mustacchi 		return (ENOTSUP);
8431dd23d762SRobert Mustacchi 	}
8432dd23d762SRobert Mustacchi 
8433dd23d762SRobert Mustacchi 	if (cno >= cpi->cpi_cache_leaf_size) {
8434dd23d762SRobert Mustacchi 		return (EINVAL);
8435dd23d762SRobert Mustacchi 	}
8436dd23d762SRobert Mustacchi 
8437*a57aa66eSRobert Mustacchi 	bzero(cache, sizeof (x86_cache_t));
8438dd23d762SRobert Mustacchi 	cp = cpi->cpi_cache_leaves[cno];
8439dd23d762SRobert Mustacchi 	switch (CPI_CACHE_TYPE(cp)) {
8440dd23d762SRobert Mustacchi 	case CPI_CACHE_TYPE_DATA:
8441dd23d762SRobert Mustacchi 		cache->xc_type = X86_CACHE_TYPE_DATA;
8442dd23d762SRobert Mustacchi 		break;
8443dd23d762SRobert Mustacchi 	case CPI_CACHE_TYPE_INSTR:
8444dd23d762SRobert Mustacchi 		cache->xc_type = X86_CACHE_TYPE_INST;
8445dd23d762SRobert Mustacchi 		break;
8446dd23d762SRobert Mustacchi 	case CPI_CACHE_TYPE_UNIFIED:
8447dd23d762SRobert Mustacchi 		cache->xc_type = X86_CACHE_TYPE_UNIFIED;
8448dd23d762SRobert Mustacchi 		break;
8449dd23d762SRobert Mustacchi 	case CPI_CACHE_TYPE_DONE:
8450dd23d762SRobert Mustacchi 	default:
8451dd23d762SRobert Mustacchi 		return (EINVAL);
8452dd23d762SRobert Mustacchi 	}
8453dd23d762SRobert Mustacchi 	cache->xc_level = CPI_CACHE_LVL(cp);
8454dd23d762SRobert Mustacchi 	if (CPI_FULL_ASSOC_CACHE(cp) != 0) {
8455dd23d762SRobert Mustacchi 		cache->xc_flags |= X86_CACHE_F_FULL_ASSOC;
8456dd23d762SRobert Mustacchi 	}
8457dd23d762SRobert Mustacchi 	cache->xc_nparts = CPI_CACHE_PARTS(cp) + 1;
8458dd23d762SRobert Mustacchi 	/*
8459dd23d762SRobert Mustacchi 	 * The number of sets is reserved on AMD if the CPU is tagged as fully
8460dd23d762SRobert Mustacchi 	 * associative, whereas it is considered valid on Intel.
8461dd23d762SRobert Mustacchi 	 */
8462dd23d762SRobert Mustacchi 	if (cpi->cpi_vendor == X86_VENDOR_AMD &&
8463dd23d762SRobert Mustacchi 	    CPI_FULL_ASSOC_CACHE(cp) != 0) {
8464dd23d762SRobert Mustacchi 		cache->xc_nsets = 1;
8465dd23d762SRobert Mustacchi 	} else {
8466dd23d762SRobert Mustacchi 		cache->xc_nsets = CPI_CACHE_SETS(cp) + 1;
8467dd23d762SRobert Mustacchi 	}
8468dd23d762SRobert Mustacchi 	cache->xc_nways = CPI_CACHE_WAYS(cp) + 1;
8469dd23d762SRobert Mustacchi 	cache->xc_line_size = CPI_CACHE_COH_LN_SZ(cp) + 1;
8470dd23d762SRobert Mustacchi 	cache->xc_size = cache->xc_nparts * cache->xc_nsets * cache->xc_nways *
8471dd23d762SRobert Mustacchi 	    cache->xc_line_size;
8472dd23d762SRobert Mustacchi 	/*
8473dd23d762SRobert Mustacchi 	 * We're looking for the number of bits needed to cover the number of
8474dd23d762SRobert Mustacchi 	 * CPUs sharing this cache. Normally we would take highbit() of that
8475dd23d762SRobert Mustacchi 	 * count minus one; however, the CPUID field is already encoded as the
8476dd23d762SRobert Mustacchi 	 * actual count minus one, so we can use it unmodified.
8477dd23d762SRobert Mustacchi 	 */
8478dd23d762SRobert Mustacchi 	cache->xc_apic_shift = highbit(CPI_NTHR_SHR_CACHE(cp));
8479dd23d762SRobert Mustacchi 
8480dd23d762SRobert Mustacchi 	/*
8481dd23d762SRobert Mustacchi 	 * To construct a unique ID we construct a uint64_t that looks as
8482dd23d762SRobert Mustacchi 	 * follows:
8483dd23d762SRobert Mustacchi 	 *
8484dd23d762SRobert Mustacchi 	 * [47:40] cache level
8485dd23d762SRobert Mustacchi 	 * [39:32] CPUID cache type
8486dd23d762SRobert Mustacchi 	 * [31:00] shifted APIC ID
8487dd23d762SRobert Mustacchi 	 *
8488dd23d762SRobert Mustacchi 	 * The shifted APIC ID gives us a guarantee that a given cache entry is
8489dd23d762SRobert Mustacchi 	 * unique within its peers. The other two numbers ensure that the ID
8490dd23d762SRobert Mustacchi 	 * is unique within a given CPU. If we just had the
8491dd23d762SRobert Mustacchi 	 * APIC ID shifted over by the indicated number of bits we'd end up with
8492dd23d762SRobert Mustacchi 	 * an ID of zero for the L1I, L1D, L2, and L3.
8493dd23d762SRobert Mustacchi 	 *
8494dd23d762SRobert Mustacchi 	 * The format of this ID is private to the system and can change across
8495dd23d762SRobert Mustacchi 	 * a reboot for the time being.
8496dd23d762SRobert Mustacchi 	 */
8497dd23d762SRobert Mustacchi 	cache->xc_id = (uint64_t)cache->xc_level << 40;
8498dd23d762SRobert Mustacchi 	cache->xc_id |= (uint64_t)cache->xc_type << 32;
8499dd23d762SRobert Mustacchi 	cache->xc_id |= (uint64_t)cpi->cpi_apicid >> cache->xc_apic_shift;
8500dd23d762SRobert Mustacchi 
8501dd23d762SRobert Mustacchi 	return (0);
8502dd23d762SRobert Mustacchi }
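
/*
 * Worked example (numbers hypothetical): a level 2 cache whose xc_type
 * value is 3, shared by up to two strands (xc_apic_shift of 1), on a CPU
 * whose APIC ID is 0x6, yields
 *
 *	xc_id = (2 << 40) | (3 << 32) | (0x6 >> 1) = 0x20300000003
 *
 * Both strands sharing that L2 compute the same ID, while their private L1
 * caches (a different level, type, and shift) cannot collide with it.
 */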
8503