Server

Use cases that benefit from optimizing small networking data packets with Intel® DPDK, an open source solution

Intel® Data Plane Development Kit (Intel® DPDK) is a set of optimized data plane software libraries and drivers that can be used to accelerate packet processing on Intel® architecture. The performance of Intel DPDK scales with improvements in processor technology, from Intel® Atom™ to Intel® Xeon® processors. In April 2013, 6WIND established dpdk.org, an open source project where Intel DPDK is offered under the open source BSD* license.
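To give a flavor of the programming model, here is a minimal sketch (not taken from the DPDK distribution) of a poll-mode receive loop using public DPDK APIs. Mempool creation and port setup (rte_eth_dev_configure, rte_eth_rx_queue_setup, rte_eth_dev_start) are assumed to have already been done and are omitted for brevity.

    /* Hedged sketch of a DPDK poll-mode receive loop; port 0 is assumed
     * to be configured and started before this loop runs. */
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    int main(int argc, char **argv)
    {
        /* Initialize the Environment Abstraction Layer (cores, hugepages). */
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

        for (;;) {
            struct rte_mbuf *bufs[BURST_SIZE];
            /* Busy-poll port 0, queue 0 for up to BURST_SIZE packets. */
            uint16_t nb_rx = rte_eth_rx_burst(0, 0, bufs, BURST_SIZE);
            for (uint16_t i = 0; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]); /* "process", then free each mbuf */
        }
        return 0;
    }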

  • Developers
  • Partners
  • Networking
  • Server
  • Intermediate
  • Intel® DPDK
  • Intel® Xeon® processors
  • SDN
  • NFV
  • Open vSwitch*
  • Intel® QuickAssist Technology
  • Big Data
  • Open Source
  • AES-GCM Encryption Performance on Intel® Xeon® E5 v3 Processors

    This case study examines the architectural improvements made to the Intel® Xeon® E5 v3 processor family to improve the performance of the Galois/Counter Mode (GCM) of AES block encryption. It looks at the impact of these improvements on the nginx* web server when backed by the OpenSSL* SSL/TLS library. With this generation of Xeon processors, web servers can obtain significant increases in maximum throughput by switching from AES in CBC mode with HMAC+SHA1 digests to AES-GCM.
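    For context, the sketch below shows the OpenSSL EVP calls that exercise this AES-GCM code path (and hence Intel® AES-NI). The key, IV, and plaintext are placeholder values, and error checking is omitted.

        /* Hedged sketch: one-shot AES-128-GCM encryption via OpenSSL's EVP API.
         * Key, IV, and plaintext are illustrative placeholders. */
        #include <stdio.h>
        #include <openssl/evp.h>

        int main(void)
        {
            unsigned char key[16] = {0};        /* placeholder 128-bit key  */
            unsigned char iv[12]  = {0};        /* GCM's standard 96-bit IV */
            unsigned char pt[]    = "hello, TLS";
            unsigned char ct[sizeof(pt)], tag[16];
            int len, ct_len;

            EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
            EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv);
            EVP_EncryptUpdate(ctx, ct, &len, pt, sizeof(pt));
            ct_len = len;
            EVP_EncryptFinal_ex(ctx, ct + len, &len);
            ct_len += len;
            /* GCM emits a 16-byte authentication tag alongside the ciphertext. */
            EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag);
            EVP_CIPHER_CTX_free(ctx);

            printf("ciphertext bytes: %d\n", ct_len);
            return 0;
        }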

  • Developers
  • Linux*
  • Server
  • Intermediate
  • Haswell
  • AES
  • Intel® Xeon® E5 v3 Processors
  • AES-GCM
  • OpenSSL
  • Intel® AES-NI
  • Security
  • Videos - Parallel Programming with Intel Xeon Phi Coprocessors

    Here is a list of recently published videos from Colfax International on Intel® Xeon Phi™ coprocessors.

    In this video, we discuss the software tools needed and recommended for developing applications for Intel Xeon Phi coprocessors. We begin with the software necessary to boot coprocessors and to run pre-compiled executables on them.

    Videos - Parallel Programming and Optimization with Intel Xeon Phi Coprocessors

    Here is a set of introductory videos from Colfax International on Parallel Programming and Optimization with Intel® Xeon Phi™ coprocessors.

    In this video episode, we introduce Intel Xeon Phi coprocessors based on the Intel Many Integrated Core (MIC) architecture and cover some specifics of the hardware implementation.

    CentOS 7 + MPSS 3.4.x + OFED 3.1x: Bug in ibp_server?

    Hi,

    I'm currently in the process of setting up the OS for a diskless cluster with two Xeon Phi cards per host.

    I'm working with CentOS 7.0, MPSS 3.4.3, OFED 3.12-1, and Lustre 2.7.0.

    Installation and booting of the host and the two Xeon Phis work fine so far, except that as soon as I try to load Lustre (using o2ib) on the second Xeon Phi, the complete system crashes due to an error within the ibp_server module (logs can be found a. Using only one Xeon Phi, Lustre works fine, including mounting over InfiniBand.

    Regarding Intel MIC offload error: buffer write failed

    I am trying to explore the code offload construct. In the following program, the offloaded region fetches the architecture of the MIC card.
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        FILE *fp;
        char data[100] = "";  /* zero-initialized so the output is NUL-terminated */

        /* Run "uname -m" on the coprocessor and copy the output back. */
        #pragma offload target(mic:0) inout(data, fp)
        {
            fp = popen("uname -m", "r");
            fread(data, sizeof(char), sizeof(data) - 1, fp);
            pclose(fp);  /* streams opened with popen() are closed with pclose() */
        }
        puts(data);
        return 0;
    }
    
    Here are three sample runs of this program:
    • The first run succeeds,

    Can AVX instructions be executed in parallel?

    Hi,

    Can two AVX instructions be executed in parallel?

    For example,

    Version1:

                /* Load eight packed floats from each input plane. */
                a1 = _mm256_load_ps(Rin + offset);
                a2 = _mm256_load_ps(Gin + offset);
                a3 = _mm256_load_ps(Bin + offset);

                /* These three multiplies have no data dependencies on
                   each other, so the hardware is free to overlap them. */
                ac0 = _mm256_mul_ps(a1, in2outAvx_11);
                ac1 = _mm256_mul_ps(a2, in2outAvx_12);
                ac2 = _mm256_mul_ps(a3, in2outAvx_13);

                /* The adds form a dependency chain: z1 needs z0. */
                z0 = _mm256_add_ps(ac0, ac1);
                z1 = _mm256_add_ps(z0, ac2);

Subscribe to Server