This information is provided to guide the designer in creating a PCI-Express board. No reference design is provided.
The following features are available:
Support for one root port/external device
Supported configuration for a single lane: 1x1
Includes associated GPIOs (per device supported)
Supports 2 REFCLK Outputs
Common MODPHY and muxing logic shared with USB3 and SATA port muxing
Precision Time Measurement (PTM)
Configurable PCIe MPS of up to 256B
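The configured MPS is programmed in the Max_Payload_Size field (bits 7:5) of the Device Control register in the PCI Express Capability structure, encoded as 128 << n. A minimal decoding sketch (the register values here are illustrative, not read from hardware):

```python
# Decode the Max_Payload_Size field of the PCIe Device Control register.
# Bits 7:5 encode the payload size as 128 << n (000b = 128B, 001b = 256B, ...).

def decode_mps(devctl: int) -> int:
    """Return the configured Max Payload Size in bytes."""
    return 128 << ((devctl >> 5) & 0x7)

# Example: bits 7:5 = 001b selects 256B, the maximum this module supports.
assert decode_mps(0b0010_0000) == 256
assert decode_mps(0x0000) == 128   # reset default is 128B
```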
Datapath Design and Tradeoffs
Because the compute module offers both PCI-Express and USB3 signals on the same set of pins, the designer must decide whether the design will interface to only one type of device or whether both functions are needed at different times during development. Different options are available to support a range of design configurations.
The block diagrams that follow illustrate a few examples of possible implementations that take advantage of the PCI-Express lane and the USB3.0 interface.
Enabling the compute module for either USB 3.0 or PCI-Express functionality is done at boot through the BIOS setup. Only one of the two functions is available for the developer’s usage at any given time, and changing from one function to the other requires re-flashing the BIOS with the proper binary file.
The process of flashing the BIOS will be the same irrespective of the functionality desired. The only difference will be which binary file is processed during BIOS flashing. The PCIe* specific BIOS binary is contained in the downloaded zip file, in a separate folder. Download the BIOS zip file first, then select which binary you need for your use case.
Note: For developers who implement both USB3 and PCIe options in hardware (resistor stuffing, signal switch, etc.), ensure proper connectivity before powering up the compute module.
PCI-Express GPIO Requirements
The PCI-Express Controller requires additional GPIOs for signaling to and from the device.
The following GPIOs are required:
Note: PCIE_CLKREQ# and PCIE_WAKE# are the default functions of their GPIOs.
It is the responsibility of the platform designer to assign unused GPIOs to PCIE_PERST# and PCIE_PFET# for functionality.
PCIE_PERST# is an output from the compute module. PERST# must remain asserted to the device from reset exit until BIOS brings the device up (or until ASL code deasserts it on RTD3 exit). The platform may select any GPIO to perform the PERST# functionality, provided it has the following characteristics:
Default to GP-out, driving ‘0’ on reset
Otherwise, it must default to GP-in, with an internal pull-down to “drive” ‘0’ on reset
On boot, BIOS controls the PERST# sequencing. ASL (ACPI Source Language) code provides PERST# control during RTD3 entry/exit flows.
The same requirements as PERST# apply to PFET. On boot, BIOS controls the PFET sequencing; ASL code provides PFET control during RTD3 entry/exit flows.
PFET is an optional signal that gives the platform the ability to enable/disable power to the external device during boot and in RTD3.
Platform Power Sequence (Cold Boot)
The timing diagram below illustrates the platform-level sequencing of the PCI-Express Controller and the PCIe GPIOs required to bring up the device.
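One ordering constraint the cold-boot sequence must honor can be modeled numerically: power and REFCLK must be stable for at least T_PVPERL (100 ms minimum per the PCI Express CEM specification) before PERST# may be deasserted. A rough sketch, with illustrative timings:

```python
# Illustrative cold-boot ordering for PERST#: power rails and REFCLK must be
# stable for at least T_PVPERL (100 ms minimum, per the PCIe CEM spec)
# before PERST# may be deasserted to the device.

T_PVPERL_MS = 100  # minimum power-valid-to-PERST#-inactive time

def perst_deassert_time_ms(power_valid_ms: float, refclk_valid_ms: float) -> float:
    """Earliest time PERST# may deassert, relative to boot start."""
    return max(power_valid_ms, refclk_valid_ms) + T_PVPERL_MS

# Example: rails stable at 5 ms, REFCLK stable at 8 ms ->
# PERST# may deassert no earlier than 108 ms.
assert perst_deassert_time_ms(5, 8) == 108
```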
PCI-Express Runtime D3 (RTD3) Entry / Exit
The device D3 state represents the non-functional device power management state where the entry and exit from this state is fully managed by software. Main power can be removed from the device in this state. Conventionally, the device is put into a D3 state as part of the flow to transition the system from an S0 to Sx system sleep state.
Runtime D3 constitutes the hardware and software enhancements to put the Root Port and device into a D3cold state, even when the system is in S0, when the device is no longer needed by the software. The tolerable exit latency from RTD3 is long, given software participation in putting the Root Port and device in this power management state.
A device in RTD3 is prohibited from generating any activity other than a wake event, through the PCIE_WAKE_N pin. The device must wait until software has fully restored the device to an operational D0 state before initiating any transactions.
Access to the device’s host interface is prohibited while in RTD3. The OS and/or device driver must queue all new IO accesses while the device is in RTD3 and transition the device back to an operational state before accessing its host interface. IO queuing must be done in a manner that does not stall software, given the potentially long device recovery latency.
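The non-stalling queuing requirement above can be sketched as a driver-side structure that accepts IO requests without blocking while the device is in RTD3, then drains them once software has restored D0. This is an illustrative model only; `RTD3Device` and its methods are not from any real driver API:

```python
from collections import deque

class RTD3Device:
    """Illustrative model of a driver queuing IO during RTD3 (not a real API)."""

    def __init__(self):
        self.in_rtd3 = True      # device starts powered down in RTD3
        self.pending = deque()   # queued IO, so callers never stall
        self.completed = []

    def submit(self, io):
        # While in RTD3, queue the request and return immediately: the
        # device's host interface must not be touched until it is back in D0.
        if self.in_rtd3:
            self.pending.append(io)
            return "queued"
        self.completed.append(io)
        return "done"

    def resume(self):
        # Software restores the device to D0 (power, PERST# deassert, link
        # retrain, context restore), then drains the queued IO in order.
        self.in_rtd3 = False
        while self.pending:
            self.completed.append(self.pending.popleft())

dev = RTD3Device()
assert dev.submit("read0") == "queued"   # no host-interface access in RTD3
dev.resume()
assert dev.completed == ["read0"]        # queued IO issued after D0 restore
assert dev.submit("read1") == "done"     # direct access once operational
```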
PCI-Express Lane Polarity Inversion
The PCI-Express Base specification requires polarity inversion to be supported independently by all receivers across a link – the differential pair of a Lane handles its own polarity inversion. Polarity inversion is applied, as needed, during the initial training sequence of a Lane; therefore, a Lane will still function correctly even if a positive (Tx+) signal from a transmitter is connected to the negative (Rx-) signal of the receiver. Polarity inversion eliminates the need to untangle a trace route to reverse a signal polarity difference within a differential pair and no special configuration settings are needed in the SoC to enable it.
It is important to note that polarity inversion does not imply direction inversion or direction reversal; that is, the Tx differential pair from one device must still connect to the Rx differential pair on the receiving device, per the PCI-Express Base Specification.
Expected Linux* / Windows* Usage for D3
The D3 state is defined as part of the Advanced Configuration and Power Interface (ACPI) device power states. The D3 controls reside in the PCI configuration space. All PCI functions must support D3.
The D3 information can be determined from the PCI headers and configuration space for all PCI functions. All Linux drivers default to using D3.
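The current D-state of a function can be decoded from the PowerState field (bits 1:0) of the Power Management Control/Status register (PMCSR) in the PCI Power Management capability. A minimal decoding sketch (register values here are illustrative):

```python
# Decode the PowerState field (bits 1:0) of the PMCSR register in the
# PCI Power Management capability structure.
_D_STATES = {0b00: "D0", 0b01: "D1", 0b10: "D2", 0b11: "D3hot"}

def decode_power_state(pmcsr: int) -> str:
    """Return the D-state encoded in a PMCSR register value."""
    return _D_STATES[pmcsr & 0x3]

assert decode_power_state(0x0003) == "D3hot"
assert decode_power_state(0x0000) == "D0"
```

D3cold is not reported here: once main power is removed, the function's configuration space is no longer accessible, so software infers D3cold from platform state rather than from the PMCSR.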
Level Translation (1.8V to 3.3V)
The compute module provides native 1.8V signaling that must be translated to 3.3V to comply with PCI-Express specifications.