Recent Content
Quartus Prime Pro 25.3 Fitter Crash
Hello, I am trying to compile a project migrated from version 24.2. All the IP blocks were updated and the Analysis & Synthesis phase completes without errors, but as soon as the Fitter starts, Quartus crashes. Here is the crash report:

Problem Details
Error: Internal Error: Sub-system: RDB, File: /quartus/db/rdb/rdb_utility.cpp, Line: 1944
rval == nullptr
Stack Trace:
Quartus 0x100ca4: rdb_find_summary + 0xb4 (db_rdb)
Quartus 0x6380d: qcu_update_flow_summary + 0x5d (comp_qcu)
Quartus 0xcde36: qcu_qis_update_flow_report + 0x246 (comp_qcu)
Quartus 0x31798e: QIS_REPORTER::qis_update_and_print_quartus_syn_flow_report + 0x14e (synth_qis)
Quartus 0x31c581: QIS_REPORTER::run + 0x161 (synth_qis)
Quartus 0x1f3d4d: QIS_RTL_STAGE::IMPL::print_synthesis_flow_report + 0x19d (synth_qis)
Quartus 0x404e6: qis_synthesis_flow_report + 0x86 (synth_qis)
Quartus 0x14640: TclInvokeStringCommand + 0xf0 (tcl86)
Quartus 0x16442: TclNRRunCallbacks + 0x62 (tcl86)
Quartus 0x17c4d: Tcl_EvalEx + 0xa1d (tcl86)
Quartus 0xa6a8b: Tcl_FSEvalFileEx + 0x22b (tcl86)
Quartus 0xa5136: Tcl_EvalFile + 0x36 (tcl86)
Quartus 0x2306c: qexe_evaluate_tcl_script + 0x66c (comp_qexe)
Quartus 0x21ba9: qexe_do_tcl + 0x8f9 (comp_qexe)
Quartus 0x2a16d: qexe_run_tcl_option + 0x6cd (comp_qexe)
Quartus 0x4144f: qcu_run_tcl_option + 0x6ef (comp_qcu)
Quartus 0x147e: qsyn2_tcl_process_default_flow_option + 0x20e (quartus_syn)
Quartus 0x29929: qexe_run + 0x629 (comp_qexe)
Quartus 0x2ab98: qexe_standard_main + 0x268 (comp_qexe)
Quartus 0xe784: qsyn2_main + 0x164 (quartus_syn)
Quartus 0x29568: msg_initialize_out_of_memory_handler + 0x368 (ccl_msg)
Quartus 0x2a772: msg_set_stack_size + 0x92 (ccl_msg)
Quartus 0x2b033: mem_tbb_tracking_flush_peak + 0x773 (ccl_mem)
Quartus 0x2743f: msg_exe_main + 0x17f (ccl_msg)
Quartus 0x112b3: __scrt_common_main_seh + 0x10b (quartus_syn)
Quartus 0x2e8d6: BaseThreadInitThunk + 0x16 (KERNEL32)
Quartus 0x8c53b: RtlUserThreadStart + 0x2b (ntdll)
End-trace

Executable: quartus
Comment: I am trying to compile a project created with Quartus 24.2. I updated all IP blocks. Analysis & Synthesis works fine without errors, but at the beginning of the Fitter, Quartus crashes.

System Information
Platform: windows64
OS name: Windows 11
OS version: 10.0.26200

Quartus Prime Information
Address bits: 64
Version: 25.3.0
Build: 109
Edition: Pro Edition

Here are the last log messages (from synthesis):
Info(17049): 6363 registers lost all their fanouts during netlist optimizations.
Info(21057): Implemented 51475 device resources after synthesis - the final resource count might be different
Info(21058): Implemented 20 input pins
Info(21059): Implemented 60 output pins
Info(21060): Implemented 98 bidirectional pins
Info(21061): Implemented 50869 logic cells
Info(21064): Implemented 340 RAM segments
Info(21062): Implemented 29 DSP elements
Info(21071): Implemented 1 partitions
Info: Successfully synthesized partition
Info: Synthesizing partition "auto_fab_0"
Info(13014): Ignored 1 buffer(s)
Info(13019): Ignored 1 SOFT buffer(s)
Info(17016): Found the following redundant logic cells in design
Info(17048): Logic cell "auto_fab_0|alt_sld_fab_0|alt_sld_fab_0|configresetfabric|conf_reset_src|universal.lc"
Info(21057): Implemented 183 device resources after synthesis - the final resource count might be different
Info(21058): Implemented 13 input pins
Info(21059): Implemented 29 output pins
Info(21061): Implemented 139 logic cells
Info: Successfully synthesized partition
Info: Saving post-synthesis snapshots for 2 partition(s)

DE1-SoC project compiles, but board programming fails 42% through
Hello, When programming of the board reaches 42%, I get the following error:

Info (209017): Device 2 contains JTAG ID code 0x02D120DD
Error (209040): Can't access JTAG chain
Error (209015): Can't configure device. Expected JTAG ID code 0x02D120DD for device 2, but found JTAG ID code 0x00000000. Make sure the location of the target device on the circuit board matches the device's location in the device chain in the Chain Description File (.cdf).

I can program the board with other designs, so it shouldn't be a physical problem, and this is a fairly basic project. Here's the Platform Designer layout:

The only files in the project are the auto-generated .qip file and a top module:

module DE1_SoC_Computer ( ports list );

    declarations

    // From Qsys
    Computer_System The_System (
        .sys_sdram_pll_0_ref_clk_clk (CLOCK_50),
        .sys_sdram_pll_0_ref_reset_reset (KEY[0]),
        .sys_sdram_pll_0_sdram_clk_clk (DRAM_CLK),
        .new_sdram_controller_0_wire_addr (DRAM_ADDR), // new_sdram_controller_0_wire.addr
        .new_sdram_controller_0_wire_ba (DRAM_BA), // .ba
        .new_sdram_controller_0_wire_cas_n (DRAM_CAS_N), // .cas_n
        .new_sdram_controller_0_wire_cke (DRAM_CKE), // .cke
        .new_sdram_controller_0_wire_cs_n (DRAM_CS_N), // .cs_n
        .new_sdram_controller_0_wire_dq (DRAM_DQ), // .dq
        .new_sdram_controller_0_wire_dqm ({DRAM_UDQM,DRAM_LDQM}),
        .new_sdram_controller_0_wire_ras_n (DRAM_RAS_N), // .ras_n
        .new_sdram_controller_0_wire_we_n (DRAM_WE_N), // .we_n
        .fifo_0_out_valid (fifo_valid), // fifo_0_out.valid
        .fifo_0_out_data (fifo_data), // .data
        .fifo_0_out_ready (fifo_ready), // .ready
        .fifo_0_out_channel (), // .channel
        .fifo_0_out_error (), // .error
        .fifo_0_out_startofpacket (fifo_sop), // .startofpacket
        .fifo_0_out_endofpacket (fifo_eop), // .endofpacket
        .fifo_0_out_empty (fifo_empty), // .empty
        .mm_bridge_0_s0_waitrequest (mm_waitrequest), // mm_bridge_0_s0.waitrequest
        .mm_bridge_0_s0_readdata (mm_readdata), // .readdata
        .mm_bridge_0_s0_readdatavalid (mm_readdatavalid), // .readdatavalid
        .mm_bridge_0_s0_burstcount (mm_burstcount), // .burstcount
        .mm_bridge_0_s0_writedata (mm_writedata), // .writedata
        .mm_bridge_0_s0_address (mm_address), // .address
        .mm_bridge_0_s0_write (mm_write), // .write
        .mm_bridge_0_s0_read (mm_read), // .read
        .mm_bridge_0_s0_byteenable (mm_byteenable), // .byteenable
        .mm_bridge_0_s0_debugaccess (mm_debugaccess), // .debugaccess

        // HPS DDR3
        .hps_0_ddr_mem_a (HPS_DDR3_ADDR),
        .hps_0_ddr_mem_ba (HPS_DDR3_BA),
        .hps_0_ddr_mem_ck (HPS_DDR3_CK_P),
        .hps_0_ddr_mem_ck_n (HPS_DDR3_CK_N),
        .hps_0_ddr_mem_cke (HPS_DDR3_CKE),
        .hps_0_ddr_mem_cs_n (HPS_DDR3_CS_N),
        .hps_0_ddr_mem_ras_n (HPS_DDR3_RAS_N),
        .hps_0_ddr_mem_cas_n (HPS_DDR3_CAS_N),
        .hps_0_ddr_mem_we_n (HPS_DDR3_WE_N),
        .hps_0_ddr_mem_reset_n (HPS_DDR3_RESET_N),
        .hps_0_ddr_mem_dq (HPS_DDR3_DQ),
        .hps_0_ddr_mem_dqs (HPS_DDR3_DQS_P),
        .hps_0_ddr_mem_dqs_n (HPS_DDR3_DQS_N),
        .hps_0_ddr_mem_odt (HPS_DDR3_ODT),
        .hps_0_ddr_mem_dm (HPS_DDR3_DM),
        .hps_0_ddr_oct_rzqin (HPS_DDR3_RZQ),

        // HPS Peripherals (Ethernet, USB, etc.)
        .hps_0_hps_io_hps_io_emac1_inst_TX_CLK (HPS_ENET_GTX_CLK),
        .hps_0_hps_io_hps_io_emac1_inst_TXD0 (HPS_ENET_TX_DATA[0]),
        .hps_0_hps_io_hps_io_emac1_inst_TXD1 (HPS_ENET_TX_DATA[1]),
        .hps_0_hps_io_hps_io_emac1_inst_TXD2 (HPS_ENET_TX_DATA[2]),
        .hps_0_hps_io_hps_io_emac1_inst_TXD3 (HPS_ENET_TX_DATA[3]),
        .hps_0_hps_io_hps_io_emac1_inst_RXD0 (HPS_ENET_RX_DATA[0]),
        .hps_0_hps_io_hps_io_emac1_inst_MDIO (HPS_ENET_MDIO),
        .hps_0_hps_io_hps_io_emac1_inst_MDC (HPS_ENET_MDC),
        .hps_0_hps_io_hps_io_emac1_inst_RX_CTL (HPS_ENET_RX_DV),
        .hps_0_hps_io_hps_io_emac1_inst_TX_CTL (HPS_ENET_TX_EN),
        .hps_0_hps_io_hps_io_emac1_inst_RX_CLK (HPS_ENET_RX_CLK),
        .hps_0_hps_io_hps_io_emac1_inst_RXD1 (HPS_ENET_RX_DATA[1]),
        .hps_0_hps_io_hps_io_emac1_inst_RXD2 (HPS_ENET_RX_DATA[2]),
        .hps_0_hps_io_hps_io_emac1_inst_RXD3 (HPS_ENET_RX_DATA[3]),
        .hps_0_hps_io_hps_io_qspi_inst_IO0 (HPS_FLASH_DATA[0]),
        .hps_0_hps_io_hps_io_qspi_inst_IO1 (HPS_FLASH_DATA[1]),
        .hps_0_hps_io_hps_io_qspi_inst_IO2 (HPS_FLASH_DATA[2]),
        .hps_0_hps_io_hps_io_qspi_inst_IO3 (HPS_FLASH_DATA[3]),
        .hps_0_hps_io_hps_io_qspi_inst_SS0 (HPS_FLASH_NCSO),
        .hps_0_hps_io_hps_io_qspi_inst_CLK (HPS_FLASH_DCLK),
        .hps_0_hps_io_hps_io_sdio_inst_CMD (HPS_SD_CMD),
        .hps_0_hps_io_hps_io_sdio_inst_D0 (HPS_SD_DATA[0]),
        .hps_0_hps_io_hps_io_sdio_inst_D1 (HPS_SD_DATA[1]),
        .hps_0_hps_io_hps_io_sdio_inst_CLK (HPS_SD_CLK),
        .hps_0_hps_io_hps_io_sdio_inst_D2 (HPS_SD_DATA[2]),
        .hps_0_hps_io_hps_io_sdio_inst_D3 (HPS_SD_DATA[3]),
        .hps_0_hps_io_hps_io_usb1_inst_D0 (HPS_USB_DATA[0]),
        .hps_0_hps_io_hps_io_usb1_inst_D1 (HPS_USB_DATA[1]),
        .hps_0_hps_io_hps_io_usb1_inst_D2 (HPS_USB_DATA[2]),
        .hps_0_hps_io_hps_io_usb1_inst_D3 (HPS_USB_DATA[3]),
        .hps_0_hps_io_hps_io_usb1_inst_D4 (HPS_USB_DATA[4]),
        .hps_0_hps_io_hps_io_usb1_inst_D5 (HPS_USB_DATA[5]),
        .hps_0_hps_io_hps_io_usb1_inst_D6 (HPS_USB_DATA[6]),
        .hps_0_hps_io_hps_io_usb1_inst_D7 (HPS_USB_DATA[7]),
        .hps_0_hps_io_hps_io_usb1_inst_CLK (HPS_USB_CLKOUT),
        .hps_0_hps_io_hps_io_usb1_inst_STP (HPS_USB_STP),
        .hps_0_hps_io_hps_io_usb1_inst_DIR (HPS_USB_DIR),
        .hps_0_hps_io_hps_io_usb1_inst_NXT (HPS_USB_NXT),
        .hps_0_hps_io_hps_io_uart0_inst_RX (HPS_UART_RX),
        .hps_0_hps_io_hps_io_uart0_inst_TX (HPS_UART_TX)
    );

    multi_tile_solver solver_inst (
        .clock (CLOCK_50),
        .reset (KEY[0]),
        .in_data (fifo_data),
        .in_valid (fifo_valid),
        .in_end_of_stream (fifo_eop),
        .in_ready (fifo_ready),
        .out_data (mm_writedata),
        .out_addr (mm_address), // The solver keeps its full 32-bit view
        .out_write_en (mm_write),
        .out_ack (solver_ack)
    );

endmodule // end top level

The Platform Designer auto-generated Computer_System.v has the following ports list:

module Computer_System (
    output wire        fifo_0_out_valid,         // fifo_0_out.valid
    output wire [31:0] fifo_0_out_data,          // .data
    output wire [7:0]  fifo_0_out_channel,       // .channel
    output wire [7:0]  fifo_0_out_error,         // .error
    output wire        fifo_0_out_startofpacket, // .startofpacket
    output wire        fifo_0_out_endofpacket,   // .endofpacket
    output wire [1:0]  fifo_0_out_empty,         // .empty
    input  wire        fifo_0_out_ready,         // .ready
    output wire [12:0] hps_0_ddr_mem_a,          // hps_0_ddr.mem_a
    output wire [2:0]  hps_0_ddr_mem_ba,         // .mem_ba
    output wire        hps_0_ddr_mem_ck,         // .mem_ck
    output wire        hps_0_ddr_mem_ck_n,       // .mem_ck_n
    output wire        hps_0_ddr_mem_cke,        // .mem_cke
    output wire        hps_0_ddr_mem_cs_n,       // .mem_cs_n
    output wire        hps_0_ddr_mem_ras_n,      // .mem_ras_n
    output wire        hps_0_ddr_mem_cas_n,      // .mem_cas_n
    output wire        hps_0_ddr_mem_we_n,       // .mem_we_n
    output wire        hps_0_ddr_mem_reset_n,    // .mem_reset_n
    inout  wire [7:0]  hps_0_ddr_mem_dq,         // .mem_dq
    inout  wire        hps_0_ddr_mem_dqs,        // .mem_dqs
    inout  wire        hps_0_ddr_mem_dqs_n,      // .mem_dqs_n
    output wire        hps_0_ddr_mem_odt,        // .mem_odt
    output wire        hps_0_ddr_mem_dm,         // .mem_dm
    input  wire        hps_0_ddr_oct_rzqin,      // .oct_rzqin
    output wire        hps_0_hps_io_hps_io_emac1_inst_TX_CLK, // hps_0_hps_io.hps_io_emac1_inst_TX_CLK
    output wire        hps_0_hps_io_hps_io_emac1_inst_TXD0,   // .hps_io_emac1_inst_TXD0
    output wire        hps_0_hps_io_hps_io_emac1_inst_TXD1,   // .hps_io_emac1_inst_TXD1
    output wire        hps_0_hps_io_hps_io_emac1_inst_TXD2,   // .hps_io_emac1_inst_TXD2
    output wire        hps_0_hps_io_hps_io_emac1_inst_TXD3,   // .hps_io_emac1_inst_TXD3
    input  wire        hps_0_hps_io_hps_io_emac1_inst_RXD0,   // .hps_io_emac1_inst_RXD0
    inout  wire        hps_0_hps_io_hps_io_emac1_inst_MDIO,   // .hps_io_emac1_inst_MDIO
    output wire        hps_0_hps_io_hps_io_emac1_inst_MDC,    // .hps_io_emac1_inst_MDC
    input  wire        hps_0_hps_io_hps_io_emac1_inst_RX_CTL, // .hps_io_emac1_inst_RX_CTL
    output wire        hps_0_hps_io_hps_io_emac1_inst_TX_CTL, // .hps_io_emac1_inst_TX_CTL
    input  wire        hps_0_hps_io_hps_io_emac1_inst_RX_CLK, // .hps_io_emac1_inst_RX_CLK
    input  wire        hps_0_hps_io_hps_io_emac1_inst_RXD1,   // .hps_io_emac1_inst_RXD1
    input  wire        hps_0_hps_io_hps_io_emac1_inst_RXD2,   // .hps_io_emac1_inst_RXD2
    input  wire        hps_0_hps_io_hps_io_emac1_inst_RXD3,   // .hps_io_emac1_inst_RXD3
    inout  wire        hps_0_hps_io_hps_io_qspi_inst_IO0,     // .hps_io_qspi_inst_IO0
    inout  wire        hps_0_hps_io_hps_io_qspi_inst_IO1,     // .hps_io_qspi_inst_IO1
    inout  wire        hps_0_hps_io_hps_io_qspi_inst_IO2,     // .hps_io_qspi_inst_IO2
    inout  wire        hps_0_hps_io_hps_io_qspi_inst_IO3,     // .hps_io_qspi_inst_IO3
    output wire        hps_0_hps_io_hps_io_qspi_inst_SS0,     // .hps_io_qspi_inst_SS0
    output wire        hps_0_hps_io_hps_io_qspi_inst_CLK,     // .hps_io_qspi_inst_CLK
    inout  wire        hps_0_hps_io_hps_io_sdio_inst_CMD,     // .hps_io_sdio_inst_CMD
    inout  wire        hps_0_hps_io_hps_io_sdio_inst_D0,      // .hps_io_sdio_inst_D0
    inout  wire        hps_0_hps_io_hps_io_sdio_inst_D1,      // .hps_io_sdio_inst_D1
    output wire        hps_0_hps_io_hps_io_sdio_inst_CLK,     // .hps_io_sdio_inst_CLK
    inout  wire        hps_0_hps_io_hps_io_sdio_inst_D2,      // .hps_io_sdio_inst_D2
    inout  wire        hps_0_hps_io_hps_io_sdio_inst_D3,      // .hps_io_sdio_inst_D3
    inout  wire        hps_0_hps_io_hps_io_usb1_inst_D0,      // .hps_io_usb1_inst_D0
    inout  wire        hps_0_hps_io_hps_io_usb1_inst_D1,      // .hps_io_usb1_inst_D1
    inout  wire        hps_0_hps_io_hps_io_usb1_inst_D2,      // .hps_io_usb1_inst_D2
    inout  wire        hps_0_hps_io_hps_io_usb1_inst_D3,      // .hps_io_usb1_inst_D3
    inout  wire        hps_0_hps_io_hps_io_usb1_inst_D4,      // .hps_io_usb1_inst_D4
    inout  wire        hps_0_hps_io_hps_io_usb1_inst_D5,      // .hps_io_usb1_inst_D5
    inout  wire        hps_0_hps_io_hps_io_usb1_inst_D6,      // .hps_io_usb1_inst_D6
    inout  wire        hps_0_hps_io_hps_io_usb1_inst_D7,      // .hps_io_usb1_inst_D7
    input  wire        hps_0_hps_io_hps_io_usb1_inst_CLK,     // .hps_io_usb1_inst_CLK
    output wire        hps_0_hps_io_hps_io_usb1_inst_STP,     // .hps_io_usb1_inst_STP
    input  wire        hps_0_hps_io_hps_io_usb1_inst_DIR,     // .hps_io_usb1_inst_DIR
    input  wire        hps_0_hps_io_hps_io_usb1_inst_NXT,     // .hps_io_usb1_inst_NXT
    input  wire        hps_0_hps_io_hps_io_uart0_inst_RX,     // .hps_io_uart0_inst_RX
    output wire        hps_0_hps_io_hps_io_uart0_inst_TX,     // .hps_io_uart0_inst_TX
    output wire        mm_bridge_0_s0_waitrequest,   // mm_bridge_0_s0.waitrequest
    output wire [15:0] mm_bridge_0_s0_readdata,      // .readdata
    output wire        mm_bridge_0_s0_readdatavalid, // .readdatavalid
    input  wire [0:0]  mm_bridge_0_s0_burstcount,    // .burstcount
    input  wire [15:0] mm_bridge_0_s0_writedata,     // .writedata
    input  wire [31:0] mm_bridge_0_s0_address,       // .address
    input  wire        mm_bridge_0_s0_write,         // .write
    input  wire        mm_bridge_0_s0_read,          // .read
    input  wire [1:0]  mm_bridge_0_s0_byteenable,    // .byteenable
    input  wire        mm_bridge_0_s0_debugaccess,   // .debugaccess
    output wire [12:0] new_sdram_controller_0_wire_addr,  // new_sdram_controller_0_wire.addr
    output wire [1:0]  new_sdram_controller_0_wire_ba,    // .ba
    output wire        new_sdram_controller_0_wire_cas_n, // .cas_n
    output wire        new_sdram_controller_0_wire_cke,   // .cke
    output wire        new_sdram_controller_0_wire_cs_n,  // .cs_n
    inout  wire [15:0] new_sdram_controller_0_wire_dq,    // .dq
    output wire [1:0]  new_sdram_controller_0_wire_dqm,   // .dqm
    output wire        new_sdram_controller_0_wire_ras_n, // .ras_n
    output wire        new_sdram_controller_0_wire_we_n,  // .we_n
    input  wire        sys_sdram_pll_0_ref_clk_clk,       // sys_sdram_pll_0_ref_clk.clk
    input  wire        sys_sdram_pll_0_ref_reset_reset,   // sys_sdram_pll_0_ref_reset.reset
    output wire        sys_sdram_pll_0_sdram_clk_clk      // sys_sdram_pll_0_sdram_clk.clk
);
...

What is causing this failure?

How-to generate dual-port (read/write) RAM with clock enables
According to the Stratix® 10 Embedded Memory User Guide (2025.07.24), chapter 2.11.6, independent clock enables are supported for read/write clock mode and for input/output clock mode.

I start a "RAM 2-port" IP generation. I select "one read/write port" in the General tab and "Dual clock: use separate read and write clocks" in the "Clks/Rd, Byte En" tab. Then I enable the clock enables in the "Reg/Clkens/Aclrs" tab: "use clock enable for read input registers" and the same for the output registers. The IP Parameter window immediately shows an error:

Error: testram.ram_2port_0: Clock enable for read input registers is unavailable while using 'Dual clock: use separate read and write clocks' for Stratix 10 device family.

This is verified with Quartus Pro 18.1 and 25.3. Is this a bug in the software or in the documentation?

Regarding Quartus Prime License Activation for the Agilex 5 Evaluation Kit
Does the Agilex 5 Premium Development Kit include a one-year paid Quartus Prime license? The product brief states that it is included, but I would like to confirm. https://docs.altera.com/v/u/docs/815177/agilextm-5-fpga-e-series-065b-premium-fpga-development-kit-product-brief

If the license is included:
- Is the same one-year license also provided with the Modular Development Kit?
- Does the bundled license also include the IP Base Suite, as with a standard paid Quartus Prime license?

Compatibility and Configuration of Arria 10 Devices with and without HPS
Hi, I have a question regarding the compatibility between Arria 10 devices with and without HPS. The pins of both variants are completely identical, and the SoC can also be operated as an FPGA without using the HPS. As far as I know, both variants consist of only one die. Are the dies identical, with the HPS simply deactivated in the FPGA-only variant?

Here's the background: can the SoC variant be configured as if it were only an FPGA device, without using the HPS pins? We conducted a test where we used an SoC in a setup where an FPGA was supposed to be installed. The device successfully booted from a configuration stored in a QSPI NAND flash and operated as expected. The pins were also accessible from the FPGA. Over JTAG, the device correctly identified itself as a 10AS016 instead of a 10AX016. Currently, we cannot identify any differences. Could you provide further information on whether there is anything else to consider? The core power supply of the HPS is connected to GND, and the FPGA core supply is 0.9V. Thank you in advance and best regards.

PCIe Enumeration Failure for CXL IP
When attempting to validate the Agilex 7 R-Tile Compute Express Link (CXL) 1.1/2.0 IP (Type 2 and Type 3) using a CXL-compatible host server, the host server is unable to complete PCIe bus enumeration. The host server stalls while attempting to complete PCIe bus enumeration; the stall never resolves after boot, and access to the host is never granted. A depiction of the stall and its status code from the host server's perspective is provided as an attached PNG file titled "pcie_enumeration_stall".

Debugging Information:
- A PCIe Gen 5.0 reference design using the Altera R-Tile Avalon Streaming IP for PCI Express was used to validate that PCIe enumeration could complete fully without failure and that the host server could exchange data with the FPGA.
- While running the CXL example design, the Quartus System Console's Link Logger indicates that the LTSSM state is "UP_L0" before the PCIe bus enumeration stall. The state may sometimes change when attempting to "Refresh" the status during the stall, and may briefly enter recovery (UP_L0 -> REC_IDLE -> REC_RCVRCFG -> REC_RCSVLOCK -> REC_COMPLETE -> UP_L0). A depiction of the Link Logger when this occurs is attached as "ltssm_link_logger".
- While running the CXL example design, the Link Logger indicates that the advertised and negotiated link speeds and widths are both 32.0 GT/s and x16. A depiction of a CXL Type 3 Quartus System Console overview is attached as "cxl_ip_systemconsole_overview".
- Instead of generating the example design, the pre-compiled binary offered by Altera for Type 2 and Type 3 CXL IP designs was also used, and it resulted in exactly the same failures as above.
- CXL.mem transaction registers (M2S and S2M) are 0x00, indicating that the host server never progresses far enough to begin sending/receiving transactions/requests.
- Between the PCIe build that functions and the CXL build that stalls at enumeration, no server UEFI settings were changed. A CXL enable function was enabled for all tests.
- Several PCIe UEFI settings were changed in an attempt to resolve the enumeration stall, but none changed the outcome. Attempting to disable the CXL Compliance 2.0 and HDM decoder registers also did not resolve the issue.
- The FPGA was powered and programmed before the server was launched.
- Two different CXL servers were tested, and both exhibited the same behavior. The relevant PCIe and CXL settings from the BIOS are attached as "cxl_server_settings".
- The CXL REFCLK was tested as both COMMON and SRIS/SRNS. Each test changed SW3 to use the relevant onboard or connector-based clocks.

IP Settings:
- CXL IP settings are uploaded as PNG files titled "cxl_ip_settings_N". The settings tested are the default provided settings as well as a version with a 300 MHz PLD clock (SRIS).

Hardware Details:
- The FPGA is connected to the host server via a PCIe Gen 5.0 x16 slot on Tile 14C.
- The FPGA device is the Altera Agilex 7 FPGA I-Series Development Kit (Production 2x R-Tile & 1x F-Tile) (AGIB027R29A1E1VB).
- The DIMM provided with the development kit is slotted into DIMM Slot A.
- SW1 is set to 1000 (PCIe PRSNT x16).
- SW3 is set to 0110 for designs using the CXL/PCIe common clock and 0000 for designs using the CXL/PCIe onboard REFCLK (SRIS).

Software Details:
- Quartus Prime Pro Edition v25.1 was used to generate the designs.
- The R-Tile Altera FPGA IP for Compute Express Link (CXL) was generated with version 1.17.0.

FPGA Design:
- The FPGA design is generated from the example design with the IP settings given above. A pre-compiled binary provided by Altera was also tested instead of a generated design.

Server Details:
- SMC AS-1126HS-TN (CXL 2.0 via 4x PCIe Gen 5 x16 slots)
- CPU: 2x AMD EPYC 9135 (CXL 2.0)
- RAM: 4x Micron 64GB @ 6000 MT/s
- UEFI: AMI 1.7a 10/30/2025

Attachments:
The System Console debug register outputs are saved to CSV files attached to this post. These CSV files are taken from a CXL Type 3 reference design with the PLD REFCLK at 300 MHz (SRIS).

Questions:
- Can you provide guidance on how to obtain more information on the enumeration status other than the LTSSM register?
- Can you provide the UEFI/BIOS settings for PCIe/CXL that were used to test this IP, as a reference?
- Could the configuration space registers (DVSEC/HDM) or the TLP handling implemented in the CXL example design RTL create this PCIe enumeration failure?
- Can you provide guidance on which debug/status registers the CXL IP provides that could be relevant to this issue?

Linux not booting - can't get kernel image
Hi, I'm having trouble booting Linux after migrating a project to the newest GSRD 2.0 (Quartus 25.3). I'm using an Agilex 5 FPGA E-Series 065B Premium Devkit. The project was based on the GSRD for Quartus 25.1 (QPDS25.1_REL_GSRD_PR) and had a few modifications; it worked in version 25.1 with the default device tree. I'm guessing this might be related to differences in the device tree between GSRD 2.0 and the previous version? I've tried looking around, but there are so many .dts and .dtsi files that I'm a bit lost. Any advice appreciated.

Failing IO buffer
A very simple design to trap a failure. Using IO buffers (8 of them), I have proved that the input from an EEPROM is read correctly, but the receiving instance's register records X"FF". I cannot see why. Any help would be appreciated because it is driving me nuts.

Stratix 10 FPGA Dev Kit VCCIO_FMC voltage issue
The FMC VCCIO voltage level is adjustable using a resistor on the board, as shown below. The default is 1.8V, and that works fine. When I depopulate the resistor (R468) to get 1.2V, the output voltage goes to 0V, and the enable line for the DC-DC converter also goes low. Any idea what the reason for this is? And what is the fix?

Error: invalid command name "Quartus"
I am running the Quartus GUI on a Linux server remotely through an SSH session. One of the two issues below happens 50% of the time after starting a compilation.

1. Quartus hangs during a stage of compilation, appearing as if it is working (the timer keeps increasing), but nothing is happening and no substantial CPU usage is observed. Trying to stop the compilation fails, and attempting to close the project fails, giving the message below.

2. Quartus fails compilation and gives the following Tcl messages:

Error: invalid command name "Quartus"
Error: while executing
Error: "unknown_original Quartus 0x202ca: (ld-linux-x86-64)"
Error: ("eval" body line 1)
Error: invoked from within
Error: "eval unknown_original $cmd $args"
Error: (procedure "::unknown" line 7)
Error: invoked from within
Error: "Quartus 0x202ca: (ld-linux-x86-64)"
Error: invoked from within
Error: "flng::run_flow_command -flow "compile" -end "dni_tlg" -resume"

After closing or killing Quartus, opening it again fails, and the terminal shows the following message:

Error (22912): Unhandled exception: Fatal Error: Assertion failed tools/cpp/ddm/ddm_assessor.cpp:53: DDM_T::verify_token(token) : Cannot identify the client from function assertion_error in tools/cpp/ddm_report/ddm_report_msg.cpp@465
*** Fatal Error: Program termination requested
***
*** Below is the stack trace at the time the error occurred.
*** The lines beginning "Err Handler" represent frames relating
*** to generating this report.
*** The point at which the error occurred is somewhere after these lines.
*** There may be a few frames representing standard/library code
*** before the Quartus frames begin.
*** The search for the error should begin with the Quartus frames.
*** Unwinder: libunwind
*** Stack depth: 15
Quartus 0x23dd9: err_terminator() + 0x1bc (ccl_err)
Quartus 0xb036a: __cxxabiv1::__terminate(void (*)()) + 0xa (stdc++)
Quartus 0xb03d5: (stdc++)
Quartus 0xb0628: (stdc++)
Quartus 0x1684d: void ddm_throw<DDM_RUNTIME_ERROR>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x26d (ddm_report)
Quartus 0x13f3e: DDM_REPORT::DDM_ASSERTION_HANDLER::assertion_error(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) const + 0xde (ddm_report)
Quartus 0x129e2: DDM_REPORT::ASSERTION_HANDLER::error(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x72 (ddm_report)
Quartus 0x13df4: DDM_REPORT::detail::assert_at_line(char const*, char const*, int, char const*, ...) + 0x1b4 (ddm_report)
Quartus 0x1debb0: ddm_set_lassessor(DDM_T_ASSESSOR*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x60 (ddm)
Quartus 0xeedc7: DMS_MANAGER::DMS_MANAGER() + 0x21f (dni_dms)
Quartus 0xeef48: DMS_MANAGER::get() + 0x7a (dni_dms)
Quartus 0xf163b: _GLOBAL__sub_I_dms_manager.cpp + 0x58 (dni_dms)
Quartus 0x647e: (ld-linux-x86-64)
Quartus 0x6568: (ld-linux-x86-64)
Quartus 0x202ca: (ld-linux-x86-64)

Checking running processes shows no Quartus-related processes at this stage. How can we solve this issue, please?
Featured Places
Community Resources
Check out the support articles on personalizing your community account, contributing to the community, and providing community feedback directly to the admin team!

Tags
- troubleshooting (10,246 topics)
- fpga dev tools quartus® prime software pro (4,128 topics)
- FPGA Dev Tools Quartus II Software (3,159 topics)
- stratix® 10 fpgas and socs (1,508 topics)
- agilex™ 7 fpgas and socs (1,360 topics)
- arria® 10 fpgas and socs (1,331 topics)
- stratix® v fpgas (1,306 topics)
- arria® v fpgas and socs (1,218 topics)
- cyclone® v fpgas and socs (1,046 topics)
- Configuration (919 topics)
Recent Blogs
Using FPGAs and MCUs Collaboratively

FPGAs and microcontrollers can be used alternatively in some applications, but they can also be used cooperatively. FPGAs provide ultimate flexibility, but microcontrollers often include peripherals like USB or wireless interfaces that may be more convenient for communications and updates. Both devices require supporting circuitry such as power, reference clocks, and storage. Fortunately, these can often be shared when using FPGAs and microcontrollers together. This blog introduces an open-source tool that enables microcontrollers to load a programming file into a programmable device, and a practical application of this with the Raspberry Pi RP2350 MCU.

An Open Standard for Loading Programmable Devices

Loading programmable devices from embedded processors is a common task. The Jam Standard Test and Programming Language (STAPL) was originally developed by Altera engineers to address challenges in programming programmable logic devices (PLDs) in-system, such as proprietary file formats, vendor-specific algorithms, large file sizes, and long programming times. It provides a software-level standard for in-system programming (ISP), enabling flexibility and platform independence.

Figure 1. In-system programming using the Jam File & Jam Player via an embedded processor.

In August 1999, JAM/STAPL was adopted as JEDEC standard JESD-71, making it an industry-recognized solution for JTAG-based programming. The language introduced features like compact file formats, branching, and looping, which reduced programming time and file size—ideal for embedded systems. JAM/STAPL consists of two main components:

- Jam Composer: generates Jam Files (.jam) containing programming algorithms and user data.
- Jam Player: interprets these files and applies JTAG vectors for programming and testing devices.

Over time, JAM/STAPL gained widespread support from PLD vendors, programming equipment makers, and test equipment manufacturers, becoming a cornerstone for in-field upgrades, prototyping, and production programming. Its evolution also included a byte-code format (.jbc) for even smaller files, making it suitable for resource-constrained embedded processors. Recently, Altera updated the license terms of the JAM and JBC player source code to MIT-0, to better clarify the usage rights.

A Practical Example

The CycloMod board is an example of an FPGA and microcontroller working cooperatively. The board combines a Raspberry Pi RP2350 MCU with a Cyclone® 10 LP FPGA in the SparkFun MicroMod form factor. On this board, the FPGA is connected to some of the edge connector I/O, while the RP2350 is used to provide a flexible USB interface. The boot ROM in the RP2350 is leveraged extensively for firmware and FPGA image updates.

Figure 2. CycloMod Board

At 22mm x 22mm (including the card-edge connector), the MicroMod form factor is quite compact. This necessitates sharing resources, as there is not much room for multiple oscillators or flash devices. The 12 MHz crystal oscillator in the RP2350 is easily shared by routing it to one of the GPIO clock outputs (see the sketch below). Both the Cyclone 10 LP device and the RP2350 rely on external storage, but this can also be shared. On this board, the flash is connected to the RP2350 to take advantage of the UF2 loading provided in the boot ROM, and the RP2350 loads the Cyclone FPGA. The Cyclone 10 LP device supports active configuration with an external SPI flash device, but it can also be configured/programmed passively through JTAG.
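The clock sharing described above takes only one call in RP2350 firmware. Below is a minimal sketch using the Pico SDK's hardware_clocks API; the GPIO number and divider are illustrative assumptions, not values taken from the actual CycloMod design.

#include "pico/stdlib.h"
#include "hardware/clocks.h"

int main(void) {
    // Drive the crystal oscillator (12 MHz here) out of GPIO 21, one of the
    // pins that can act as a GPOUT clock output, so the FPGA can use the
    // same reference clock as the MCU. A divider of 1 passes the XOSC
    // frequency through unchanged.
    clock_gpio_init(21, CLOCKS_CLK_GPOUT0_CTRL_AUXSRC_VALUE_XOSC_CLKSRC, 1);

    while (true) {
        tight_loop_contents();  // the clock output is generated in hardware
    }
}

Because the GPOUT path is driven by a dedicated hardware clock generator, no further CPU involvement is needed after the call, which is what makes a shared oscillator practical here.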
Figure 3. CycloMod Block Diagram

The STAPL byte-code format (sometimes referred to as JBC) is compact enough to be used with microcontrollers like the RP2350. Altera provides source code for implementing the "players" that process these files in embedded systems, offering players for both the ASCII (JAM) and byte-code (JBC) versions of the files. Altera's Quartus® software provides the option to generate JAM and JBC files, and since STAPL is a JEDEC standard, other FPGA vendors also support generating these files.

Using the open-source code provided by Altera, the RP2350 is able to read a JBC file from flash and load the Cyclone 10 LP FPGA through the JTAG interface. A Python script is provided to convert the JBC files to the UF2 format, which the RP2350 uses for drag-n-drop programming; the script also adds a header with the file length and other details (a generic sketch of the UF2 packing appears below). Thanks to the ingenuity of the UF2 format created by Microsoft, this enables cross-platform field updates with zero software to install.

Results and Link to Source

Porting Altera's JBC player to the RP2350 eliminated the need for a second flash device and enabled user-friendly drag-n-drop FPGA updates. The port is available on GitHub if you want to use this in your system: https://github.com/steieio/pico-jbc
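For reference, the heart of any such JBC-to-UF2 conversion is packing the binary into 512-byte UF2 blocks. The sketch below is in C rather than Python and is a generic illustration of the published UF2 block layout, not the project's actual script; the base address and family ID are placeholders you would replace with whatever the CycloMod firmware expects, and the length header the blog mentions is omitted.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define UF2_MAGIC_START0 0x0A324655u  /* "UF2\n" */
#define UF2_MAGIC_START1 0x9E5D5157u
#define UF2_MAGIC_END    0x0AB16F30u
#define UF2_FLAG_FAMILY_ID_PRESENT 0x00002000u

struct uf2_block {
    uint32_t magic_start0, magic_start1, flags;
    uint32_t target_addr, payload_size, block_no, num_blocks, family_id;
    uint8_t  data[476];               /* only the first 256 bytes are used */
    uint32_t magic_end;
};
_Static_assert(sizeof(struct uf2_block) == 512, "UF2 blocks are 512 bytes");

/* Wrap `len` bytes of payload (e.g., a .jbc image) into UF2 blocks. */
static void write_uf2(FILE *out, const uint8_t *payload, size_t len,
                      uint32_t base_addr, uint32_t family_id) {
    uint32_t num_blocks = (uint32_t)((len + 255) / 256);
    for (uint32_t i = 0; i < num_blocks; i++) {
        struct uf2_block b = {0};
        b.magic_start0 = UF2_MAGIC_START0;
        b.magic_start1 = UF2_MAGIC_START1;
        b.flags        = UF2_FLAG_FAMILY_ID_PRESENT;
        b.target_addr  = base_addr + i * 256;  /* flash address for this chunk */
        b.payload_size = 256;
        b.block_no     = i;
        b.num_blocks   = num_blocks;
        b.family_id    = family_id;            /* placeholder, device-specific */
        size_t chunk = len - (size_t)i * 256;
        if (chunk > 256) chunk = 256;          /* last block is zero-padded */
        memcpy(b.data, payload + (size_t)i * 256, chunk);
        b.magic_end = UF2_MAGIC_END;
        fwrite(&b, sizeof b, 1, out);          /* each block is exactly 512 B */
    }
}

Dragging the resulting file onto the RP2350's boot ROM mass-storage volume is then all a field update requires, which is the zero-install update path the post describes.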
30 days ago
The expanded Agilex™ 5 D-Series FPGA and SoC family delivers a big leap in capabilities for mid-range FPGA applications, offering up to 2.5× more logic, memory, and DSP/AI compute, and up to 2× the external memory bandwidth. These enhancements make it ideal for designs that demand high compute performance in power- and space-constrained environments.
1 month ago
We’re gearing up for AOC 2025! From December 9–11, we’ll be at the Gaylord National Resort & Convention Center in National Harbor, Maryland for AOC 2025—one of North America’s premier events dedicated to electronic warfare and radar. Visit us at booth #505 to discover the latest innovations in our Agilex™ 9 Direct RF and Agilex™ 5 product families.

What to Expect at Altera’s Booth #505:

1. Wideband and Agility Demo using Agilex 9
Overview: Discover the power of frequency hopping with Altera’s Direct RF FPGA, enhancing system resilience and adaptability.
Key Features: Demonstrates swift frequency changes and wideband monitoring.

2. Wideband Channelizer Demo using Agilex 9
Overview: The Wideband Channelizer features polyphase filter and 65-phase FFT blocks with variable channel support.
Key Features: Demonstrates a sampling rate of 64 GSPS with 32 GHz of instantaneous bandwidth.

3. Direction of Arrival Demo using Agilex 5
Overview: Explore Direction of Arrival estimation and signal detection using an AI-based approach with deployment of neural networks.
Key Features: Demonstrates neural network implementation using the DSP Builder Advanced Blockset (DSPBA), showcasing end-to-end operation running real-time inference.

4. Altera COTS Partner Showcase
Come see our Agilex-based COTS boards from partners including Annapolis Microsystems, CAES, Hitek, iWave Global, Mercury Systems, & Spectrum Controls.

We are hosting customer meetings at the event; contact your local Altera salesperson to schedule a slot.
1 month ago
5 MIN READ
The computing world is hitting a wall. As AI models grow to trillions of parameters, as in-line databases scale to massive sizes, and as high-performance computing (HPC) workloads push bandwidth and memory to their limits, the need for more efficient data movement has never been greater. Traditional approaches to scaling bandwidth and capacity can’t keep pace without unsustainable cost expenditures on power usage and infrastructure build-out.

Compression offers a practical and elegant solution to this challenge. By reducing the size of data that moves across interconnects, we can stretch bandwidth, improve memory efficiency, and lower system power—all without requiring a fundamental re-architecture. The Open Compute Project (OCP) has recently recognized this reality, highlighting compression as a key enabler for modern workloads. The combination of advanced compression IP from ZeroPoint Technologies (an Altera partner) and Altera’s CXL Type 3 IP and FPGAs results in a 2–3x increase in bandwidth, giving the industry a proven path to meet the growing demand head-on.

The Problem: Data Bottlenecks in Today’s Workloads

AI and LLMs
Large language models are exploding in size—parameters have grown from millions to billions, and now to trillions, in just a few short years. Training and inference of these models are fundamentally constrained by memory bandwidth and capacity. Without compression, these models would require even larger amounts of data movement, which increases latency, power consumption, and cost.

In-line Databases
Databases are increasingly run in-line with applications, from analytics pipelines to transaction processing. These in-line databases demand high throughput and low-latency access to massive datasets. Without compression, systems are forced to overprovision bandwidth and memory resources, dramatically increasing the total cost of ownership (TCO).

High-Performance Computing (HPC)
From climate modeling to genomics, HPC workloads require immense amounts of parallel data movement. Without compression, HPC centers must continue scaling raw interconnect bandwidth, which is unsustainable in terms of energy and cost at exascale levels.

CXL Expansion (CXL Device Type 3)
CXL (Compute Express Link) has emerged as the industry-standard protocol for memory pooling and expansion. Yet, as more systems adopt CXL for disaggregated memory, the sheer volume of data moving across CXL links risks overwhelming interconnect bandwidth. Without compression, the benefits of CXL expansion hit a hard ceiling.

Demo Video: ZeroPoint demonstrates 2–3x increased bandwidth using its CXL compressed memory tier solution at the Future of Memory and Storage (FMS) 2025.

CXL Acceleration (CXL Device Type 2)
Beyond memory expansion, CXL enables accelerators to share memory seamlessly with CPUs. But in accelerator-heavy environments, data transfer volumes explode. Lack of compression makes accelerator scaling inefficient, power-hungry, and cost-prohibitive.

Contact Altera to see the demo video: 2x–6x higher QPS running a VectorDB workload using a CXL 2.0 interface.

Without compression, every one of these workloads faces a bottleneck that would be extremely difficult to solve with hardware scaling alone.

OCP Introduces Compression into its Specification

The Open Compute Project (OCP) organization recently underscored the importance of compression by including it in its specifications. This is a landmark shift: compression is no longer viewed as optional but is included as a supported feature for next-generation compute infrastructure.

James Kelly, VP Market Intelligence and Innovation at the OCP Foundation, said: “Within the OCP Community, our Composable Memory Systems Project, leveraging CXL and compression technologies, is driving the development of interoperable, scalable memory architectures that empower AI workloads with unprecedented efficiency and flexibility. By enabling disaggregated memory resources to be pooled and allocated across heterogeneous systems, we’re directly supporting OCP’s Open System for AI strategic initiative, fostering open specifications and standards that accelerate innovation and accessibility in AI infrastructure.”

Klas Moreau, CEO of ZeroPoint Technologies, added: “What excites us about working with Altera’s CXL Type 3 IP is not just its performance, but its flexibility. Unlike other FPGA providers, Altera’s CXL solution gives us the low-latency, high-bandwidth fabric we need to showcase the full potential of our compression IP. Together, we’re able to deliver measurable gains—up to a 2–3x effective bandwidth increase—without changing the underlying hardware footprint. That’s a game-changer for customers scaling AI, HPC, and database workloads.”

The Solution: ZeroPoint Compression IP + Altera CXL Type 3 IP and FPGA-based Boards

ZeroPoint Compression Technology
ZeroPoint brings a powerful, low-latency, hardware-efficient compression engine designed specifically for memory and interconnect applications. Unlike general-purpose compression algorithms, ZeroPoint’s IP is optimized for inline operation at wire speed, ensuring data is compressed and decompressed seamlessly without introducing overhead. Key benefits include:

- High compression ratios across AI, HPC, and database workloads
- Ultra-low latency to avoid bottlenecks on memory paths
- Energy savings by reducing data movement requirements
- Proven scalability across CXL and memory expansion use cases

Altera CXL Type 3 IP
Altera’s CXL Type 3 IP provides the foundation for memory expansion and pooling. It enables compute nodes to access disaggregated memory resources efficiently and securely. By integrating ZeroPoint’s compression IP, Altera’s solution extends even further—allowing CXL links to move more effective bandwidth, reduce congestion, and scale system capacity without increasing physical resources. A wide variety of CXL-capable FPGA-based boards is available from Altera or partners.

Together: Meeting the Market Need
When combined, ZeroPoint’s compression IP and Altera’s CXL Type 3 IP address the OCP-driven specification requirements and solve the core problem facing data-intensive applications, ranging from AI to databases: moving massive amounts of data efficiently. Benefits to customers include:

- More bandwidth without more lanes: compression effectively multiplies CXL throughput.
- Boost performance, cut costs: unleash untapped performance in your current infrastructure with minimal new investment.
- Future-proof compliance: alignment with OCP specifications ensures long-term viability.

This combination delivers not just a technology improvement, but a market-ready solution that meets both current and emerging requirements.

Conclusion
The computing industry is shifting to adjust to new demands. AI, HPC, databases, and disaggregated systems are demanding exponential growth in bandwidth and memory efficiency—growth that hardware scaling alone cannot deliver. One answer is compression. OCP’s inclusion of compression in its specifications validates this direction and creates a mandate for solutions that integrate compression seamlessly with interconnect technologies like CXL. Through the combination of ZeroPoint’s cutting-edge compression IP and Altera’s CXL Type 3 IP, customers can now confidently deploy systems that are not only faster and more efficient but also aligned with the industry’s forward-looking standards. The future of computing depends on smarter ways to move and manage data. Compression + CXL is that smarter way—and with ZeroPoint and Altera, the future is already here.

Learn More
Presentations and videos are available for on-demand viewing or download:
- FMS 2025 session (video | slides)
- OCP 2025 session (video | slides)

Next Steps
Learn more about Altera’s CXL IP core. For technical details, partnership discussions, or general inquiries, please contact:
- nilesh.shah@zptcorp.com — CXL compression solutions
- phillip.swart@altera.com — FPGA-based CXL IP and boards
2 months ago
4 MIN READ
The availability of Quartus Prime Pro Edition 25.3 and the simultaneous release of FPGA AI Suite 25.3 mark a major leap forward in FPGA design productivity. This release delivers smarter tools, deeper insights, and faster compiles: a 6% compile-time improvement over 25.1 (a 27% reduction since Agilex 7 transitioned to production), as well as improved ease of use for the AI tools.
2 months ago