
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking changes from David Miller:

 1) Multiply in netfilter IPVS can overflow when calculating destination
    weight.  From Simon Kirby.

 2) Use after free fixes in IPVS from Julian Anastasov.

 3) SFC driver bug fixes from Daniel Pieczko.

 4) Memory leak in pcan_usb_core failure paths, from Alexey Khoroshilov.

 5) Locking and encapsulation fixes to serial line CAN driver, from
    Andrew Naujoks.

 6) Duplex and VF handling fixes to bnx2x driver from Yaniv Rosner,
    Eilon Greenstein, and Ariel Elior.

 7) In lapb, if no other packets are outstanding, T1 timeouts actually
    stall things and no packet gets sent.  Fix from Josselin Costanzi.

 8) ICMP redirects should not make it to the socket error queues, from
    Duan Jiong.

 9) Fix bugs in skge DMA mapping error handling, from Mikulas Patocka.

10) Fix setting of VLAN priority field on via-rhine driver, from Roger
    Luethi.

11) Fix TX stalls and VLAN promisc programming in be2net driver from
    Ajit Khaparde.

12) Packet padding doesn't get handled correctly in new usbnet SG
    support code, from Ming Lei.

13) Fix races in netdevice teardown wrt. network namespace closing.
    From Eric W. Biederman.

14) Fix potential missed initialization of net_secret if no TCP
    connections are opened.  From Eric Dumazet.

15) Cinterion PLXX product ID in qmi_wwan driver is wrong, from
    Aleksander Morgado.

16) skb_cow_head() can change skb->data and thus the packet header
    pointers; don't use a stale ip_hdr() reference in the ip_tunnel code
    (see the sketch after this list).

17) Backend state transition handling fixes in xen-netback, from Paul
    Durrant.

18) Packet offset for AH protocol is handled wrong in flow dissector,
    from Eric Dumazet.

19) Taking down an fq packet scheduler instance can leave stale packets
    in the queues, fix from Eric Dumazet.

20) Fix performance regressions introduced by TCP Small Queues.  From
    Eric Dumazet.

21) IPV6 GRE tunneling code calculates max_headroom incorrectly, from
    Hannes Frederic Sowa.

22) Multicast timer handlers in ipv4 and ipv6 can hold the last and final
    reference to the ipv4/ipv6 specific network device state, so use the
    reference put that checks and releases the object when the count hits
    zero (see the sketch after this list).  From Salam Noureddine.

23) Fix memory corruption in the ip_tunnel driver, and use skb_push()
    instead of __skb_push() so that similar bugs are easier to find.
    From Steffen Klassert.

24) Add forgotten hookup of rtnl_ops in SIT and ip6tnl drivers, from
    Nicolas Dichtel.

25) fq scheduler doesn't accurately rate limit in certain circumstances,
    from Eric Dumazet.
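
To illustrate the pattern behind (16): skb_cow_head() may reallocate the
header area and move skb->data, so any header pointer taken before the
call has to be re-read afterwards.  A minimal sketch, with an invented
function name (this is not the actual ip_tunnel change):

  #include <linux/ip.h>
  #include <linux/skbuff.h>

  /* Hypothetical example, for illustration only. */
  static int example_tunnel_xmit(struct sk_buff *skb)
  {
          struct iphdr *iph = ip_hdr(skb);
          __u8 tos = iph->tos;            /* safe: read before the copy */

          if (skb_cow_head(skb, sizeof(struct iphdr)) < 0)
                  return -ENOMEM;

          iph = ip_hdr(skb);              /* re-read: skb->data may have moved */
          iph->tos = tos;
          return 0;
  }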

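Likewise for (22), a sketch of the reference-put rule in a timer handler;
the callback name is invented and this is not the igmp/mcast diff itself:

  #include <linux/inetdevice.h>

  /* Hypothetical timer callback, for illustration only. */
  static void example_mc_timer_expire(struct in_device *in_dev)
  {
          /* ... handle the expired multicast timer ... */

          /* This may drop the final reference: in_dev_put() frees the
           * object once the count reaches zero, while __in_dev_put()
           * only decrements and would leak that last reference.
           */
          in_dev_put(in_dev);
  }
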
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (103 commits)
  pkt_sched: fq: rate limiting improvements
  ip6tnl: allow to use rtnl ops on fb tunnel
  sit: allow to use rtnl ops on fb tunnel
  ip_tunnel: Remove double unregister of the fallback device
  ip_tunnel_core: Change __skb_push back to skb_push
  ip_tunnel: Add fallback tunnels to the hash lists
  ip_tunnel: Fix a memory corruption in ip_tunnel_xmit
  qlcnic: Fix SR-IOV configuration
  ll_temac: Reset dma descriptors indexes on ndo_open
  skbuff: size of hole is wrong in a comment
  ipv6 mcast: use in6_dev_put in timer handlers instead of __in6_dev_put
  ipv4 igmp: use in_dev_put in timer handlers instead of __in_dev_put
  ethernet: moxa: fix incorrect placement of __initdata tag
  ipv6: gre: correct calculation of max_headroom
  powerpc/83xx: gianfar_ptp: select 1588 clock source through dts file
  Revert "powerpc/83xx: gianfar_ptp: select 1588 clock source through dts file"
  bonding: Fix broken promiscuity reference counting issue
  tcp: TSQ can use a dynamic limit
  dm9601: fix IFF_ALLMULTI handling
  pkt_sched: fq: qdisc dismantle fixes
  ...
commit c31eeaced2 (master)
Linus Torvalds
Documentation/devicetree/bindings/net/fsl-tsec-phy.txt | 18
MAINTAINERS | 1
drivers/bcma/driver_pci.c | 49
drivers/bluetooth/ath3k.c | 2
drivers/bluetooth/btusb.c | 5
drivers/net/bonding/bond_main.c | 13
drivers/net/can/flexcan.c | 12
drivers/net/can/slcan.c | 139
drivers/net/can/usb/peak_usb/pcan_usb_core.c | 15
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c | 3
drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c | 189
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 24
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c | 5
drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c | 50
drivers/net/ethernet/emulex/benet/be.h | 2
drivers/net/ethernet/emulex/benet/be_cmds.c | 9
drivers/net/ethernet/emulex/benet/be_cmds.h | 4
drivers/net/ethernet/emulex/benet/be_main.c | 76
drivers/net/ethernet/freescale/gianfar_ptp.c | 4
drivers/net/ethernet/intel/i40e/i40e_adminq.c | 7
drivers/net/ethernet/intel/i40e/i40e_common.c | 2
drivers/net/ethernet/intel/i40e/i40e_main.c | 162
drivers/net/ethernet/intel/igb/igb_ethtool.c | 3
drivers/net/ethernet/marvell/skge.c | 9
drivers/net/ethernet/moxa/moxart_ether.c | 2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c | 8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c | 39
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c | 8
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sysfs.c | 12
drivers/net/ethernet/qlogic/qlge/qlge_dbg.c | 4
drivers/net/ethernet/qlogic/qlge/qlge_mpi.c | 2
drivers/net/ethernet/sfc/mcdi.c | 10
drivers/net/ethernet/via/via-rhine.c | 9
drivers/net/ethernet/xilinx/ll_temac_main.c | 6
drivers/net/slip/slip.c | 3
drivers/net/usb/dm9601.c | 2
drivers/net/usb/qmi_wwan.c | 2
drivers/net/usb/usbnet.c | 27
drivers/net/vxlan.c | 9
drivers/net/wireless/ath/ath9k/recv.c | 7
drivers/net/wireless/ath/ath9k/xmit.c | 17
drivers/net/wireless/brcm80211/brcmfmac/bcmsdh_sdmmc.c | 28
drivers/net/wireless/brcm80211/brcmfmac/dhd_bus.h | 3
drivers/net/wireless/brcm80211/brcmfmac/dhd_linux.c | 14
drivers/net/wireless/brcm80211/brcmfmac/usb.c | 2
drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c | 4
drivers/net/wireless/cw1200/cw1200_spi.c | 26
drivers/net/wireless/cw1200/fwio.c | 2
drivers/net/wireless/cw1200/hwbus.h | 1
drivers/net/wireless/cw1200/hwio.c | 15
drivers/net/wireless/mwifiex/11n_aggr.c | 3
drivers/net/wireless/mwifiex/11n_aggr.h | 2
drivers/net/wireless/mwifiex/cmdevt.c | 5
drivers/net/wireless/mwifiex/usb.c | 7
drivers/net/wireless/mwifiex/wmm.c | 3
drivers/net/wireless/p54/p54usb.c | 2
drivers/net/wireless/rtlwifi/wifi.h | 2
drivers/net/xen-netback/xenbus.c | 148
include/linux/bcma/bcma_driver_pci.h | 1
include/linux/kernel.h | 11
include/linux/skbuff.h | 2
include/linux/usb/usbnet.h | 1
include/net/addrconf.h | 4
include/net/bluetooth/hci.h | 1
include/net/ip_vs.h | 9
include/net/mrp.h | 1
include/net/net_namespace.h | 1
include/net/netfilter/nf_conntrack_synproxy.h | 2
include/net/secure_seq.h | 1
include/net/sock.h | 5
lib/hexdump.c | 2
net/802/mrp.c | 27
net/bluetooth/hci_core.c | 26
net/bluetooth/hci_event.c | 6
net/bluetooth/l2cap_core.c | 7
net/bluetooth/rfcomm/tty.c | 35
net/core/dev.c | 49
net/core/flow_dissector.c | 4
net/core/secure_seq.c | 27
net/ipv4/af_inet.c | 4
net/ipv4/igmp.c | 4
net/ipv4/ip_tunnel.c | 22
net/ipv4/ip_tunnel_core.c | 2
net/ipv4/netfilter/ipt_SYNPROXY.c | 10
net/ipv4/raw.c | 4
net/ipv4/tcp_output.c | 17
net/ipv4/udp.c | 2
net/ipv6/addrconf.c | 79
net/ipv6/ip6_gre.c | 4
net/ipv6/ip6_output.c | 53
net/ipv6/ip6_tunnel.c | 3
net/ipv6/mcast.c | 6
net/ipv6/netfilter/ip6t_SYNPROXY.c | 10
net/ipv6/raw.c | 4
net/ipv6/sit.c | 86
net/ipv6/udp.c | 4
net/lapb/lapb_timer.c | 1
net/netfilter/ipvs/ip_vs_core.c | 12
net/netfilter/ipvs/ip_vs_ctl.c | 86
net/netfilter/ipvs/ip_vs_est.c | 4

Documentation/devicetree/bindings/net/fsl-tsec-phy.txt | 18

@@ -86,6 +86,7 @@ General Properties:
Clock Properties:
- fsl,cksel Timer reference clock source.
- fsl,tclk-period Timer reference clock period in nanoseconds.
- fsl,tmr-prsc Prescaler, divides the output clock.
- fsl,tmr-add Frequency compensation value.
@@ -97,7 +98,7 @@ Clock Properties:
clock. You must choose these carefully for the clock to work right.
Here is how to figure good values:
TimerOsc = system clock MHz
TimerOsc = selected reference clock MHz
tclk_period = desired clock period nanoseconds
NominalFreq = 1000 / tclk_period MHz
FreqDivRatio = TimerOsc / NominalFreq (must be greater that 1.0)
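For illustration only (the oscillator rate here is an assumption, not part
of the binding): with TimerOsc = 500 MHz and tclk_period = 10 ns,
NominalFreq = 1000 / 10 = 100 MHz and FreqDivRatio = 500 / 100 = 5.0,
which satisfies the greater-than-1.0 requirement.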
@@ -114,6 +115,20 @@ Clock Properties:
Pulse Per Second (PPS) signal, since this will be offered to the PPS
subsystem to synchronize the Linux clock.
Reference clock source is determined by the value, which is holded
in CKSEL bits in TMR_CTRL register. "fsl,cksel" property keeps the
value, which will be directly written in those bits, that is why,
according to reference manual, the next clock sources can be used:
<0> - external high precision timer reference clock (TSEC_TMR_CLK
input is used for this purpose);
<1> - eTSEC system clock;
<2> - eTSEC1 transmit clock;
<3> - RTC clock input.
When this attribute is not used, eTSEC system clock will serve as
IEEE 1588 timer reference clock.
Example:
ptp_clock@24E00 {
@@ -121,6 +136,7 @@ Example:
reg = <0x24E00 0xB0>;
interrupts = <12 0x8 13 0x8>;
interrupt-parent = < &ipic >;
fsl,cksel = <1>;
fsl,tclk-period = <10>;
fsl,tmr-prsc = <100>;
fsl,tmr-add = <0x999999A4>;

MAINTAINERS | 1

@@ -9378,6 +9378,7 @@ F: arch/arm64/include/asm/xen/
XEN NETWORK BACKEND DRIVER
M: Ian Campbell <ian.campbell@citrix.com>
M: Wei Liu <wei.liu2@citrix.com>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)
L: netdev@vger.kernel.org
S: Supported

drivers/bcma/driver_pci.c | 49

@@ -210,25 +210,6 @@ static void bcma_core_pci_config_fixup(struct bcma_drv_pci *pc)
}
}
static void bcma_core_pci_power_save(struct bcma_drv_pci *pc, bool up)
{
u16 data;
if (pc->core->id.rev >= 15 && pc->core->id.rev <= 20) {
data = up ? 0x74 : 0x7C;
bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7F64);
bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
} else if (pc->core->id.rev >= 21 && pc->core->id.rev <= 22) {
data = up ? 0x75 : 0x7D;
bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7E65);
bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
}
}
/**************************************************
* Init.
**************************************************/
@@ -255,6 +236,32 @@ void bcma_core_pci_init(struct bcma_drv_pci *pc)
bcma_core_pci_clientmode_init(pc);
}
void bcma_core_pci_power_save(struct bcma_bus *bus, bool up)
{
struct bcma_drv_pci *pc;
u16 data;
if (bus->hosttype != BCMA_HOSTTYPE_PCI)
return;
pc = &bus->drv_pci[0];
if (pc->core->id.rev >= 15 && pc->core->id.rev <= 20) {
data = up ? 0x74 : 0x7C;
bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7F64);
bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
} else if (pc->core->id.rev >= 21 && pc->core->id.rev <= 22) {
data = up ? 0x75 : 0x7D;
bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7E65);
bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
}
}
EXPORT_SYMBOL_GPL(bcma_core_pci_power_save);
int bcma_core_pci_irq_ctl(struct bcma_drv_pci *pc, struct bcma_device *core,
bool enable)
{
@@ -310,8 +317,6 @@ void bcma_core_pci_up(struct bcma_bus *bus)
pc = &bus->drv_pci[0];
bcma_core_pci_power_save(pc, true);
bcma_core_pci_extend_L1timer(pc, true);
}
EXPORT_SYMBOL_GPL(bcma_core_pci_up);
@@ -326,7 +331,5 @@ void bcma_core_pci_down(struct bcma_bus *bus)
pc = &bus->drv_pci[0];
bcma_core_pci_extend_L1timer(pc, false);
bcma_core_pci_power_save(pc, false);
}
EXPORT_SYMBOL_GPL(bcma_core_pci_down);

drivers/bluetooth/ath3k.c | 2

@@ -85,6 +85,7 @@ static struct usb_device_id ath3k_table[] = {
{ USB_DEVICE(0x04CA, 0x3008) },
{ USB_DEVICE(0x13d3, 0x3362) },
{ USB_DEVICE(0x0CF3, 0xE004) },
{ USB_DEVICE(0x0CF3, 0xE005) },
{ USB_DEVICE(0x0930, 0x0219) },
{ USB_DEVICE(0x0489, 0xe057) },
{ USB_DEVICE(0x13d3, 0x3393) },
@@ -126,6 +127,7 @@ static struct usb_device_id ath3k_blist_tbl[] = {
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },

drivers/bluetooth/btusb.c | 5

@@ -102,6 +102,7 @@ static struct usb_device_id btusb_table[] = {
/* Broadcom BCM20702A0 */
{ USB_DEVICE(0x0b05, 0x17b5) },
{ USB_DEVICE(0x0b05, 0x17cb) },
{ USB_DEVICE(0x04ca, 0x2003) },
{ USB_DEVICE(0x0489, 0xe042) },
{ USB_DEVICE(0x413c, 0x8197) },
@@ -112,6 +113,9 @@ static struct usb_device_id btusb_table[] = {
/*Broadcom devices with vendor specific id */
{ USB_VENDOR_AND_INTERFACE_INFO(0x0a5c, 0xff, 0x01, 0x01) },
/* Belkin F8065bf - Broadcom based */
{ USB_VENDOR_AND_INTERFACE_INFO(0x050d, 0xff, 0x01, 0x01) },
{ } /* Terminating entry */
};
@@ -148,6 +152,7 @@ static struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },

drivers/net/bonding/bond_main.c | 13

@@ -1724,6 +1724,7 @@ static int __bond_release_one(struct net_device *bond_dev,
struct bonding *bond = netdev_priv(bond_dev);
struct slave *slave, *oldcurrent;
struct sockaddr addr;
int old_flags = bond_dev->flags;
netdev_features_t old_features = bond_dev->features;
/* slave is not a slave or master is not master of this slave */
@@ -1855,12 +1856,18 @@ static int __bond_release_one(struct net_device *bond_dev,
* bond_change_active_slave(..., NULL)
*/
if (!USES_PRIMARY(bond->params.mode)) {
/* unset promiscuity level from slave */
if (bond_dev->flags & IFF_PROMISC)
/* unset promiscuity level from slave
* NOTE: The NETDEV_CHANGEADDR call above may change the value
* of the IFF_PROMISC flag in the bond_dev, but we need the
* value of that flag before that change, as that was the value
* when this slave was attached, so we cache at the start of the
* function and use it here. Same goes for ALLMULTI below
*/
if (old_flags & IFF_PROMISC)
dev_set_promiscuity(slave_dev, -1);
/* unset allmulti level from slave */
if (bond_dev->flags & IFF_ALLMULTI)
if (old_flags & IFF_ALLMULTI)
dev_set_allmulti(slave_dev, -1);
bond_hw_addr_flush(bond_dev, slave_dev);

drivers/net/can/flexcan.c | 12

@@ -702,7 +702,6 @@ static int flexcan_chip_start(struct net_device *dev)
{
struct flexcan_priv *priv = netdev_priv(dev);
struct flexcan_regs __iomem *regs = priv->base;
unsigned int i;
int err;
u32 reg_mcr, reg_ctrl;
@@ -772,17 +771,6 @@ static int flexcan_chip_start(struct net_device *dev)
netdev_dbg(dev, "%s: writing ctrl=0x%08x", __func__, reg_ctrl);
flexcan_write(reg_ctrl, &regs->ctrl);
for (i = 0; i < ARRAY_SIZE(regs->cantxfg); i++) {
flexcan_write(0, &regs->cantxfg[i].can_ctrl);
flexcan_write(0, &regs->cantxfg[i].can_id);
flexcan_write(0, &regs->cantxfg[i].data[0]);
flexcan_write(0, &regs->cantxfg[i].data[1]);
/* put MB into rx queue */
flexcan_write(FLEXCAN_MB_CNT_CODE(0x4),
&regs->cantxfg[i].can_ctrl);
}
/* acceptance mask/acceptance code (accept everything) */
flexcan_write(0x0, &regs->rxgmask);
flexcan_write(0x0, &regs->rx14mask);

drivers/net/can/slcan.c | 139

@@ -76,6 +76,10 @@ MODULE_PARM_DESC(maxdev, "Maximum number of slcan interfaces");
/* maximum rx buffer len: extended CAN frame with timestamp */
#define SLC_MTU (sizeof("T1111222281122334455667788EA5F\r")+1)
#define SLC_CMD_LEN 1
#define SLC_SFF_ID_LEN 3
#define SLC_EFF_ID_LEN 8
struct slcan {
int magic;
@@ -142,47 +146,63 @@ static void slc_bump(struct slcan *sl)
{
struct sk_buff *skb;
struct can_frame cf;
int i, dlc_pos, tmp;
unsigned long ultmp;
char cmd = sl->rbuff[0];
if ((cmd != 't') && (cmd != 'T') && (cmd != 'r') && (cmd != 'R'))
int i, tmp;
u32 tmpid;
char *cmd = sl->rbuff;
cf.can_id = 0;
switch (*cmd) {
case 'r':
cf.can_id = CAN_RTR_FLAG;
/* fallthrough */
case 't':
/* store dlc ASCII value and terminate SFF CAN ID string */
cf.can_dlc = sl->rbuff[SLC_CMD_LEN + SLC_SFF_ID_LEN];
sl->rbuff[SLC_CMD_LEN + SLC_SFF_ID_LEN] = 0;
/* point to payload data behind the dlc */
cmd += SLC_CMD_LEN + SLC_SFF_ID_LEN + 1;
break;
case 'R':
cf.can_id = CAN_RTR_FLAG;
/* fallthrough */
case 'T':
cf.can_id |= CAN_EFF_FLAG;
/* store dlc ASCII value and terminate EFF CAN ID string */
cf.can_dlc = sl->rbuff[SLC_CMD_LEN + SLC_EFF_ID_LEN];
sl->rbuff[SLC_CMD_LEN + SLC_EFF_ID_LEN] = 0;
/* point to payload data behind the dlc */
cmd += SLC_CMD_LEN + SLC_EFF_ID_LEN + 1;
break;
default:
return;
}
if (cmd & 0x20) /* tiny chars 'r' 't' => standard frame format */
dlc_pos = 4; /* dlc position tiiid */
else
dlc_pos = 9; /* dlc position Tiiiiiiiid */
if (!((sl->rbuff[dlc_pos] >= '0') && (sl->rbuff[dlc_pos] < '9')))
if (kstrtou32(sl->rbuff + SLC_CMD_LEN, 16, &tmpid))
return;
cf.can_dlc = sl->rbuff[dlc_pos] - '0'; /* get can_dlc from ASCII val */
cf.can_id |= tmpid;
sl->rbuff[dlc_pos] = 0; /* terminate can_id string */
if (kstrtoul(sl->rbuff+1, 16, &ultmp))
/* get can_dlc from sanitized ASCII value */
if (cf.can_dlc >= '0' && cf.can_dlc < '9')
cf.can_dlc -= '0';
else
return;
cf.can_id = ultmp;
if (!(cmd & 0x20)) /* NO tiny chars => extended frame format */
cf.can_id |= CAN_EFF_FLAG;
if ((cmd | 0x20) == 'r') /* RTR frame */
cf.can_id |= CAN_RTR_FLAG;
*(u64 *) (&cf.data) = 0; /* clear payload */
for (i = 0, dlc_pos++; i < cf.can_dlc; i++) {
tmp = hex_to_bin(sl->rbuff[dlc_pos++]);
if (tmp < 0)
return;
cf.data[i] = (tmp << 4);
tmp = hex_to_bin(sl->rbuff[dlc_pos++]);
if (tmp < 0)
return;
cf.data[i] |= tmp;
/* RTR frames may have a dlc > 0 but they never have any data bytes */
if (!(cf.can_id & CAN_RTR_FLAG)) {
for (i = 0; i < cf.can_dlc; i++) {
tmp = hex_to_bin(*cmd++);
if (tmp < 0)
return;
cf.data[i] = (tmp << 4);
tmp = hex_to_bin(*cmd++);
if (tmp < 0)
return;
cf.data[i] |= tmp;
}
}
skb = dev_alloc_skb(sizeof(struct can_frame) +
@@ -209,7 +229,6 @@ static void slc_bump(struct slcan *sl)
/* parse tty input stream */
static void slcan_unesc(struct slcan *sl, unsigned char s)
{
if ((s == '\r') || (s == '\a')) { /* CR or BEL ends the pdu */
if (!test_and_clear_bit(SLF_ERROR, &sl->flags) &&
(sl->rcount > 4)) {
@@ -236,27 +255,46 @@ static void slcan_unesc(struct slcan *sl, unsigned char s)
/* Encapsulate one can_frame and stuff into a TTY queue. */
static void slc_encaps(struct slcan *sl, struct can_frame *cf)
{
int actual, idx, i;
char cmd;
int actual, i;
unsigned char *pos;
unsigned char *endpos;
canid_t id = cf->can_id;
pos = sl->xbuff;
if (cf->can_id & CAN_RTR_FLAG)
cmd = 'R'; /* becomes 'r' in standard frame format */
*pos = 'R'; /* becomes 'r' in standard frame format (SFF) */
else
cmd = 'T'; /* becomes 't' in standard frame format */
*pos = 'T'; /* becomes 't' in standard frame format (SSF) */
if (cf->can_id & CAN_EFF_FLAG)
sprintf(sl->xbuff, "%c%08X%d", cmd,
cf->can_id & CAN_EFF_MASK, cf->can_dlc);
else
sprintf(sl->xbuff, "%c%03X%d", cmd | 0x20,
cf->can_id & CAN_SFF_MASK, cf->can_dlc);
/* determine number of chars for the CAN-identifier */
if (cf->can_id & CAN_EFF_FLAG) {
id &= CAN_EFF_MASK;
endpos = pos + SLC_EFF_ID_LEN;
} else {
*pos |= 0x20; /* convert R/T to lower case for SFF */
id &= CAN_SFF_MASK;
endpos = pos + SLC_SFF_ID_LEN;
}
idx = strlen(sl->xbuff);
/* build 3 (SFF) or 8 (EFF) digit CAN identifier */
pos++;
while (endpos >= pos) {
*endpos-- = hex_asc_upper[id & 0xf];
id >>= 4;
}
pos += (cf->can_id & CAN_EFF_FLAG) ? SLC_EFF_ID_LEN : SLC_SFF_ID_LEN;
for (i = 0; i < cf->can_dlc; i++)
sprintf(&sl->xbuff[idx + 2*i], "%02X", cf->data[i]);
*pos++ = cf->can_dlc + '0';
/* RTR frames may have a dlc > 0 but they never have any data bytes */
if (!(cf->can_id & CAN_RTR_FLAG)) {
for (i = 0; i < cf->can_dlc; i++)
pos = hex_byte_pack_upper(pos, cf->data[i]);
}
strcat(sl->xbuff, "\r"); /* add terminating character */
*pos++ = '\r';
/* Order of next two lines is *very* important.
* When we are sending a little amount of data,
@@ -267,8 +305,8 @@ static void slc_encaps(struct slcan *sl, struct can_frame *cf)
* 14 Oct 1994 Dmitry Gorodchanin.
*/
set_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
actual = sl->tty->ops->write(sl->tty, sl->xbuff, strlen(sl->xbuff));
sl->xleft = strlen(sl->xbuff) - actual;
actual = sl->tty->ops->write(sl->tty, sl->xbuff, pos - sl->xbuff);
sl->xleft = (pos - sl->xbuff) - actual;
sl->xhead = sl->xbuff + actual;
sl->dev->stats.tx_bytes += cf->can_dlc;
}
@@ -286,11 +324,13 @@ static void slcan_write_wakeup(struct tty_struct *tty)
if (!sl || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev))
return;
spin_lock(&sl->lock);
if (sl->xleft <= 0) {
/* Now serial buffer is almost free & we can start
* transmission of another packet */
sl->dev->stats.tx_packets++;
clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
spin_unlock(&sl->lock);
netif_wake_queue(sl->dev);
return;
}
@@ -298,6 +338,7 @@ static void slcan_write_wakeup(struct tty_struct *tty)
actual = tty->ops->write(tty, sl->xhead, sl->xleft);
sl->xleft -= actual;
sl->xhead += actual;
spin_unlock(&sl->lock);
}
/* Send a can_frame to a TTY queue. */
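
As a worked example of the ASCII framing that slc_encaps() above produces
(values chosen for illustration, not taken from the patch): a
standard-frame CAN message with ID 0x123, dlc 2 and data DE AD goes out
as "t1232DEAD\r"; the extended-frame variant uses the upper-case 'T'/'R'
command with an 8-digit hexadecimal identifier, and RTR frames carry the
dlc digit but no data bytes.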

drivers/net/can/usb/peak_usb/pcan_usb_core.c | 15

@@ -463,7 +463,7 @@ static int peak_usb_start(struct peak_usb_device *dev)
if (i < PCAN_USB_MAX_TX_URBS) {
if (i == 0) {
netdev_err(netdev, "couldn't setup any tx URB\n");
return err;
goto err_tx;
}
netdev_warn(netdev, "tx performance may be slow\n");
@@ -472,7 +472,7 @@ static int peak_usb_start(struct peak_usb_device *dev)
if (dev->adapter->dev_start) {
err = dev->adapter->dev_start(dev);
if (err)
goto failed;
goto err_adapter;
}
dev->state |= PCAN_USB_STATE_STARTED;
@@ -481,19 +481,26 @@ static int peak_usb_start(struct peak_usb_device *dev)
if (dev->adapter->dev_set_bus) {
err = dev->adapter->dev_set_bus(dev, 1);
if (err)
goto failed;
goto err_adapter;
}
dev->can.state = CAN_STATE_ERROR_ACTIVE;
return 0;
failed:
err_adapter:
if (err == -ENODEV)
netif_device_detach(dev->netdev);
netdev_warn(netdev, "couldn't submit control: %d\n", err);
for (i = 0; i < PCAN_USB_MAX_TX_URBS; i++) {
usb_free_urb(dev->tx_contexts[i].urb);
dev->tx_contexts[i].urb = NULL;
}
err_tx:
usb_kill_anchored_urbs(&dev->rx_submitted);
return err;
}

drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c | 3

@@ -2481,8 +2481,7 @@ load_error_cnic2:
load_error_cnic1:
bnx2x_napi_disable_cnic(bp);
/* Update the number of queues without the cnic queues */
rc = bnx2x_set_real_num_queues(bp, 0);
if (rc)
if (bnx2x_set_real_num_queues(bp, 0))
BNX2X_ERR("Unable to set real_num_queues not including cnic\n");
load_error_cnic0:
BNX2X_ERR("CNIC-related load failed\n");

drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c | 189

@@ -175,6 +175,7 @@ typedef int (*read_sfp_module_eeprom_func_p)(struct bnx2x_phy *phy,
#define EDC_MODE_LINEAR 0x0022
#define EDC_MODE_LIMITING 0x0044
#define EDC_MODE_PASSIVE_DAC 0x0055
#define EDC_MODE_ACTIVE_DAC 0x0066
/* ETS defines*/
#define DCBX_INVALID_COS (0xFF)
@@ -3684,6 +3685,41 @@ static void bnx2x_warpcore_enable_AN_KR2(struct bnx2x_phy *phy,
bnx2x_update_link_attr(params, vars->link_attr_sync);
}
static void bnx2x_disable_kr2(struct link_params *params,
struct link_vars *vars,
struct bnx2x_phy *phy)
{
struct bnx2x *bp = params->bp;
int i;
static struct bnx2x_reg_set reg_set[] = {
/* Step 1 - Program the TX/RX alignment markers */
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL5, 0x7690},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL7, 0xe647},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL6, 0xc4f0},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL9, 0x7690},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL11, 0xe647},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL10, 0xc4f0},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_USERB0_CTRL, 0x000c},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL1, 0x6000},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL3, 0x0000},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CODE_FIELD, 0x0002},
{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI1, 0x0000},
{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI2, 0x0af7},
{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI3, 0x0af7},
{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_BAM_CODE, 0x0002},
{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_UD_CODE, 0x0000}
};
DP(NETIF_MSG_LINK, "Disabling 20G-KR2\n");
for (i = 0; i < ARRAY_SIZE(reg_set); i++)
bnx2x_cl45_write(bp, phy, reg_set[i].devad, reg_set[i].reg,
reg_set[i].val);
vars->link_attr_sync &= ~LINK_ATTR_SYNC_KR2_ENABLE;
bnx2x_update_link_attr(params, vars->link_attr_sync);
vars->check_kr2_recovery_cnt = CHECK_KR2_RECOVERY_CNT;
}
static void bnx2x_warpcore_set_lpi_passthrough(struct bnx2x_phy *phy,
struct link_params *params)
{
@@ -3715,7 +3751,6 @@ static void bnx2x_warpcore_enable_AN_KR(struct bnx2x_phy *phy,
struct link_params *params,
struct link_vars *vars) {
u16 lane, i, cl72_ctrl, an_adv = 0;
u16 ucode_ver;
struct bnx2x *bp = params->bp;
static struct bnx2x_reg_set reg_set[] = {
{MDIO_WC_DEVAD, MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, 0x7},
@@ -3806,15 +3841,7 @@ static void bnx2x_warpcore_enable_AN_KR(struct bnx2x_phy *phy,
/* Advertise pause */
bnx2x_ext_phy_set_pause(params, phy, vars);
/* Set KR Autoneg Work-Around flag for Warpcore version older than D108
*/
bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD,
MDIO_WC_REG_UC_INFO_B1_VERSION, &ucode_ver);
if (ucode_ver < 0xd108) {
DP(NETIF_MSG_LINK, "Enable AN KR work-around. WC ver:0x%x\n",
ucode_ver);
vars->rx_tx_asic_rst = MAX_KR_LINK_RETRY;
}
vars->rx_tx_asic_rst = MAX_KR_LINK_RETRY;
bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD,
MDIO_WC_REG_DIGITAL5_MISC7, 0x100);
@@ -3838,6 +3865,8 @@ static void bnx2x_warpcore_enable_AN_KR(struct bnx2x_phy *phy,
bnx2x_set_aer_mmd(params, phy);
bnx2x_warpcore_enable_AN_KR2(phy, params, vars);
} else {
bnx2x_disable_kr2(params, vars, phy);
}
/* Enable Autoneg: only on the main lane */
@@ -4347,20 +4376,14 @@ static void bnx2x_warpcore_config_runtime(struct bnx2x_phy *phy,
struct bnx2x *bp = params->bp;
u32 serdes_net_if;
u16 gp_status1 = 0, lnkup = 0, lnkup_kr = 0;
u16 lane = bnx2x_get_warpcore_lane(phy, params);
vars->turn_to_run_wc_rt = vars->turn_to_run_wc_rt ? 0 : 1;
if (!vars->turn_to_run_wc_rt)
return;
/* Return if there is no link partner */
if (!(bnx2x_warpcore_get_sigdet(phy, params))) {
DP(NETIF_MSG_LINK, "bnx2x_warpcore_get_sigdet false\n");
return;
}
if (vars->rx_tx_asic_rst) {
u16 lane = bnx2x_get_warpcore_lane(phy, params);
serdes_net_if = (REG_RD(bp, params->shmem_base +
offsetof(struct shmem_region, dev_info.
port_hw_config[params->port].default_cfg)) &
@@ -4375,14 +4398,8 @@ static void bnx2x_warpcore_config_runtime(struct bnx2x_phy *phy,
/*10G KR*/
lnkup_kr = (gp_status1 >> (12+lane)) & 0x1;
DP(NETIF_MSG_LINK,
"gp_status1 0x%x\n", gp_status1);
if (lnkup_kr || lnkup) {
vars->rx_tx_asic_rst = 0;
DP(NETIF_MSG_LINK,
"link up, rx_tx_asic_rst 0x%x\n",
vars->rx_tx_asic_rst);
vars->rx_tx_asic_rst = 0;
} else {
/* Reset the lane to see if link comes up.*/
bnx2x_warpcore_reset_lane(bp, phy, 1);
@@ -4507,10 +4524,14 @@ static void bnx2x_warpcore_config_init(struct bnx2x_phy *phy,
* enabled transmitter to avoid current leakage in case
* no module is connected
*/
if (bnx2x_is_sfp_module_plugged(phy, params))
bnx2x_sfp_module_detection(phy, params);
else
bnx2x_sfp_e3_set_transmitter(params, phy, 1);
if ((params->loopback_mode == LOOPBACK_NONE) ||
(params->loopback_mode == LOOPBACK_EXT)) {
if (bnx2x_is_sfp_module_plugged(phy, params))
bnx2x_sfp_module_detection(phy, params);
else
bnx2x_sfp_e3_set_transmitter(params,
phy, 1);
}
bnx2x_warpcore_config_sfi(phy, params);
break;
@@ -5757,6 +5778,11 @@ static int bnx2x_warpcore_read_status(struct bnx2x_phy *phy,
rc = bnx2x_get_link_speed_duplex(phy, params, vars, link_up, gp_speed,
duplex);
/* In case of KR link down, start up the recovering procedure */
if ((!link_up) && (phy->media_type == ETH_PHY_KR) &&
(!(phy->flags & FLAGS_WC_DUAL_MODE)))
vars->rx_tx_asic_rst = MAX_KR_LINK_RETRY;
DP(NETIF_MSG_LINK, "duplex %x flow_ctrl 0x%x link_status 0x%x\n",
vars->duplex, vars->flow_ctrl, vars->link_status);
return rc;
@@ -6507,6 +6533,11 @@ static int bnx2x_link_initialize(struct link_params *params,
params->phy[INT_PHY].config_init(phy, params, vars);
}
/* Re-read this value in case it was changed inside config_init due to
* limitations of optic module
*/
vars->line_speed = params->phy[INT_PHY].req_line_speed;
/* Init external phy*/
if (non_ext_phy) {
if (params->phy[INT_PHY].supported &
@@ -8080,7 +8111,10 @@ static int bnx2x_get_edc_mode(struct bnx2x_phy *phy,
if (copper_module_type &
SFP_EEPROM_FC_TX_TECH_BITMASK_COPPER_ACTIVE) {
DP(NETIF_MSG_LINK, "Active Copper cable detected\n");
check_limiting_mode = 1;
if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT)
*edc_mode = EDC_MODE_ACTIVE_DAC;
else
check_limiting_mode = 1;
} else if (copper_module_type &
SFP_EEPROM_FC_TX_TECH_BITMASK_COPPER_PASSIVE) {
DP(NETIF_MSG_LINK,
@@ -8555,6 +8589,7 @@ static void bnx2x_warpcore_set_limiting_mode(struct link_params *params,
mode = MDIO_WC_REG_UC_INFO_B1_FIRMWARE_MODE_DEFAULT;
break;
case EDC_MODE_PASSIVE_DAC:
case EDC_MODE_ACTIVE_DAC:
mode = MDIO_WC_REG_UC_INFO_B1_FIRMWARE_MODE_SFP_DAC;
break;
default:
@@ -9730,32 +9765,41 @@ static int bnx2x_848xx_cmn_config_init(struct bnx2x_phy *phy,
MDIO_AN_DEVAD, MDIO_AN_REG_8481_1000T_CTRL,
an_1000_val);
/* set 100 speed advertisement */
if ((phy->req_line_speed == SPEED_AUTO_NEG) &&
(phy->speed_cap_mask &
(PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL |
PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_HALF))) {
an_10_100_val |= (1<<7);
/* Enable autoneg and restart autoneg for legacy speeds */
autoneg_val |= (1<<9 | 1<<12);
if (phy->req_duplex == DUPLEX_FULL)
/* Set 10/100 speed advertisement */
if (phy->req_line_speed == SPEED_AUTO_NEG) {
if (phy->speed_cap_mask &
PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL) {
/* Enable autoneg and restart autoneg for legacy speeds
*/
autoneg_val |= (1<<9 | 1<<12);
an_10_100_val |= (1<<8);
DP(NETIF_MSG_LINK, "Advertising 100M\n");
}
/* set 10 speed advertisement */
if (((phy->req_line_speed == SPEED_AUTO_NEG) &&
(phy->speed_cap_mask &
(PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL |
PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_HALF)) &&
(phy->supported &
(SUPPORTED_10baseT_Half |
SUPPORTED_10baseT_Full)))) {
an_10_100_val |= (1<<5);
autoneg_val |= (1<<9 | 1<<12);
if (phy->req_duplex == DUPLEX_FULL)
DP(NETIF_MSG_LINK, "Advertising 100M-FD\n");
}
if (phy->speed_cap_mask &
PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_HALF) {
/* Enable autoneg and restart autoneg for legacy speeds
*/
autoneg_val |= (1<<9 | 1<<12);
an_10_100_val |= (1<<7);
DP(NETIF_MSG_LINK, "Advertising 100M-HD\n");
}
if ((phy->speed_cap_mask &
PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL) &&
(phy->supported & SUPPORTED_10baseT_Full)) {
an_10_100_val |= (1<<6);
DP(NETIF_MSG_LINK, "Advertising 10M\n");
autoneg_val |= (1<<9 | 1<<12);
DP(NETIF_MSG_LINK, "Advertising 10M-FD\n");
}
if ((phy->speed_cap_mask &
PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_HALF) &&
(phy->supported & SUPPORTED_10baseT_Half)) {
an_10_100_val |= (1<<5);
autoneg_val |= (1<<9 | 1<<12);
DP(NETIF_MSG_LINK, "Advertising 10M-HD\n");
}
}
/* Only 10/100 are allowed to work in FORCE mode */
@@ -13432,43 +13476,6 @@ static void bnx2x_sfp_tx_fault_detection(struct bnx2x_phy *phy,
}
}
}
static void bnx2x_disable_kr2(struct link_params *params,
struct link_vars *vars,
struct bnx2x_phy *phy)
{
struct bnx2x *bp = params->bp;
int i;
static struct bnx2x_reg_set reg_set[] = {
/* Step 1 - Program the TX/RX alignment markers */
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL5, 0x7690},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL7, 0xe647},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL6, 0xc4f0},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL9, 0x7690},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL11, 0xe647},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL10, 0xc4f0},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_USERB0_CTRL, 0x000c},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL1, 0x6000},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL3, 0x0000},
{MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CODE_FIELD, 0x0002},
{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI1, 0x0000},
{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI2, 0x0af7},
{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI3, 0x0af7},
{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_BAM_CODE, 0x0002},
{MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_UD_CODE, 0x0000}
};
DP(NETIF_MSG_LINK, "Disabling 20G-KR2\n");
for (i = 0; i < ARRAY_SIZE(reg_set); i++)
bnx2x_cl45_write(bp, phy, reg_set[i].devad, reg_set[i].reg,
reg_set[i].val);
vars->link_attr_sync &= ~LINK_ATTR_SYNC_KR2_ENABLE;
bnx2x_update_link_attr(params, vars->link_attr_sync);
vars->check_kr2_recovery_cnt = CHECK_KR2_RECOVERY_CNT;
/* Restart AN on leading lane */
bnx2x_warpcore_restart_AN_KR(phy, params);
}
static void bnx2x_kr2_recovery(struct link_params *params,
struct link_vars *vars,
struct bnx2x_phy *phy)
@@ -13546,6 +13553,8 @@ static void bnx2x_check_kr2_wa(struct link_params *params,
/* Disable KR2 on both lanes */
DP(NETIF_MSG_LINK, "BP=0x%x, NP=0x%x\n", base_page, next_page);
bnx2x_disable_kr2(params, vars, phy);
/* Restart AN on leading lane */
bnx2x_warpcore_restart_AN_KR(phy, params);
return;
}
}

drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 24

@@ -4703,6 +4703,14 @@ bool bnx2x_chk_parity_attn(struct bnx2x *bp, bool *global, bool print)
attn.sig[3] = REG_RD(bp,
MISC_REG_AEU_AFTER_INVERT_4_FUNC_0 +
port*4);
/* Since MCP attentions can't be disabled inside the block, we need to
* read AEU registers to see whether they're currently disabled
*/
attn.sig[3] &= ((REG_RD(bp,
!port ? MISC_REG_AEU_ENABLE4_FUNC_0_OUT_0
: MISC_REG_AEU_ENABLE4_FUNC_1_OUT_0) &
MISC_AEU_ENABLE_MCP_PRTY_BITS) |
~MISC_AEU_ENABLE_MCP_PRTY_BITS);
if (!CHIP_IS_E1x(bp))
attn.sig[4] = REG_RD(bp,
@@ -5447,26 +5455,24 @@ static void bnx2x_timer(unsigned long data)
if (IS_PF(bp) &&
!BP_NOMCP(bp)) {
int mb_idx = BP_FW_MB_IDX(bp);
u32 drv_pulse;
u32 mcp_pulse;
u16 drv_pulse;
u16 mcp_pulse;
++bp->fw_drv_pulse_wr_seq;
bp->fw_drv_pulse_wr_seq &= DRV_PULSE_SEQ_MASK;
/* TBD - add SYSTEM_TIME */
drv_pulse = bp->fw_drv_pulse_wr_seq;
bnx2x_drv_pulse(bp);
mcp_pulse = (SHMEM_RD(bp, func_mb[mb_idx].mcp_pulse_mb) &
MCP_PULSE_SEQ_MASK);
/* The delta between driver pulse and mcp response
* should be 1 (before mcp response) or 0 (after mcp response)
* should not get too big. If the MFW is more than 5 pulses
* behind, we should worry about it enough to generate an error
* log.
*/
if ((drv_pulse != mcp_pulse) &&
(drv_pulse != ((mcp_pulse + 1) & MCP_PULSE_SEQ_MASK))) {
/* someone lost a heartbeat... */
BNX2X_ERR("drv_pulse (0x%x) != mcp_pulse (0x%x)\n",
if (((drv_pulse - mcp_pulse) & MCP_PULSE_SEQ_MASK) > 5)
BNX2X_ERR("MFW seems hanged: drv_pulse (0x%x) != mcp_pulse (0x%x)\n",
drv_pulse, mcp_pulse);
}
}
if (bp->state == BNX2X_STATE_OPEN)

drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c | 5

@@ -1819,7 +1819,7 @@ bnx2x_get_vf_igu_cam_info(struct bnx2x *bp)
fid = GET_FIELD((val), IGU_REG_MAPPING_MEMORY_FID);
if (fid & IGU_FID_ENCODE_IS_PF)
current_pf = fid & IGU_FID_PF_NUM_MASK;
else if (current_pf == BP_ABS_FUNC(bp))
else if (current_pf == BP_FUNC(bp))
bnx2x_vf_set_igu_info(bp, sb_id,
(fid & IGU_FID_VF_NUM_MASK));
DP(BNX2X_MSG_IOV, "%s[%d], igu_sb_id=%d, msix=%d\n",
@@ -3180,6 +3180,7 @@ int bnx2x_enable_sriov(struct bnx2x *bp)
/* set local queue arrays */
vf->vfqs = &bp->vfdb->vfqs[qcount];
qcount += vf_sb_count(vf);
bnx2x_iov_static_resc(bp, vf);
}
/* prepare msix vectors in VF configuration space */
@@ -3187,6 +3188,8 @@ int bnx2x_enable_sriov(struct bnx2x *bp)
bnx2x_pretend_func(bp, HW_VF_HANDLE(bp, vf_idx));
REG_WR(bp, PCICFG_OFFSET + GRC_CONFIG_REG_VF_MSIX_CONTROL,
num_vf_queues);
DP(BNX2X_MSG_IOV, "set msix vec num in VF %d cfg space to %d\n",
vf_idx, num_vf_queues);
}
bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));

drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c | 50

@@ -1765,28 +1765,28 @@ static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
switch (mbx->first_tlv.tl.type) {
case CHANNEL_TLV_ACQUIRE:
bnx2x_vf_mbx_acquire(bp, vf, mbx);
break;
return;
case CHANNEL_TLV_INIT:
bnx2x_vf_mbx_init_vf(bp, vf, mbx);
break;
return;
case CHANNEL_TLV_SETUP_Q:
bnx2x_vf_mbx_setup_q(bp, vf, mbx);
break;
return;
case CHANNEL_TLV_SET_Q_FILTERS:
bnx2x_vf_mbx_set_q_filters(bp, vf, mbx);
break;
return;
case CHANNEL_TLV_TEARDOWN_Q:
bnx2x_vf_mbx_teardown_q(bp, vf, mbx);
break;
return;
case CHANNEL_TLV_CLOSE:
bnx2x_vf_mbx_close_vf(bp, vf, mbx);
break;
return;
case CHANNEL_TLV_RELEASE:
bnx2x_vf_mbx_release_vf(bp, vf, mbx);
break;
return;
case CHANNEL_TLV_UPDATE_RSS:
bnx2x_vf_mbx_update_rss(bp, vf, mbx);
break;
return;
}
} else {
@@ -1802,26 +1802,24 @@ static void bnx2x_vf_mbx_request(struct bnx2x *bp, struct bnx2x_virtf *vf,
for (i = 0; i < 20; i++)
DP_CONT(BNX2X_MSG_IOV, "%x ",
mbx->msg->req.tlv_buf_size.tlv_buffer[i]);
}
/* test whether we can respond to the VF (do we have an address
* for it?)
*/
if (vf->state == VF_ACQUIRED || vf->state == VF_ENABLED) {
/* mbx_resp uses the op_rc of the VF */
vf->op_rc = PFVF_STATUS_NOT_SUPPORTED;
/* can we respond to VF (do we have an address for it?) */
if (vf->state == VF_ACQUIRED || vf->state == VF_ENABLED) {
/* mbx_resp uses the op_rc of the VF */
vf->op_rc = PFVF_STATUS_NOT_SUPPORTED;
/* notify the VF that we do not support this request */
bnx2x_vf_mbx_resp(bp, vf);
} else {
/* can't send a response since this VF is unknown to us
* just ack the FW to release the mailbox and unlock
* the channel.
*/
storm_memset_vf_mbx_ack(bp, vf->abs_vfid);
mmiowb();
bnx2x_unlock_vf_pf_channel(bp, vf,
mbx->first_tlv.tl.type);
}
/* notify the VF that we do not support this request */
bnx2x_vf_mbx_resp(bp, vf);
} else {
/* can't send a response since this VF is unknown to us
* just ack the FW to release the mailbox and unlock
* the channel.
*/
storm_memset_vf_mbx_ack(bp, vf->abs_vfid);
/* Firmware ack should be written before unlocking channel */
mmiowb();
bnx2x_unlock_vf_pf_channel(bp, vf, mbx->first_tlv.tl.type);
}
}

drivers/net/ethernet/emulex/benet/be.h | 2

@@ -88,6 +88,7 @@ static inline char *nic_name(struct pci_dev *pdev)
#define BE_MIN_MTU 256
#define BE_NUM_VLANS_SUPPORTED 64
#define BE_UMC_NUM_VLANS_SUPPORTED 15
#define BE_MAX_EQD 96u
#define BE_MAX_TX_FRAG_COUNT 30
@@ -333,6 +334,7 @@ enum vf_state {
#define BE_FLAGS_LINK_STATUS_INIT 1
#define BE_FLAGS_WORKER_SCHEDULED (1 << 3)
#define BE_FLAGS_VLAN_PROMISC (1 << 4)
#define BE_FLAGS_NAPI_ENABLED (1 << 9)
#define BE_UC_PMAC_COUNT 30
#define BE_VF_UC_PMAC_COUNT 2

drivers/net/ethernet/emulex/benet/be_cmds.c | 9

@@ -180,6 +180,9 @@ static int be_mcc_compl_process(struct be_adapter *adapter,
dev_err(&adapter->pdev->dev,
"opcode %d-%d failed:status %d-%d\n",
opcode, subsystem, compl_status, extd_status);
if (extd_status == MCC_ADDL_STS_INSUFFICIENT_RESOURCES)
return extd_status;
}
}
done:
@@ -1812,6 +1815,12 @@ int be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 value)
} else if (flags & IFF_ALLMULTI) {
req->if_flags_mask = req->if_flags =
cpu_to_le32(BE_IF_FLAGS_MCAST_PROMISCUOUS);
} else if (flags & BE_FLAGS_VLAN_PROMISC) {
req->if_flags_mask = cpu_to_le32(BE_IF_FLAGS_VLAN_PROMISCUOUS);
if (value == ON)
req->if_flags =
cpu_to_le32(BE_IF_FLAGS_VLAN_PROMISCUOUS);
} else {
struct netdev_hw_addr *ha;
int i = 0;

drivers/net/ethernet/emulex/benet/be_cmds.h | 4

@@ -60,6 +60,8 @@ enum {
MCC_STATUS_NOT_SUPPORTED = 66
};
#define MCC_ADDL_STS_INSUFFICIENT_RESOURCES 0x16
#define CQE_STATUS_COMPL_MASK 0xFFFF
#define CQE_STATUS_COMPL_SHIFT 0 /* bits 0 - 15 */
#define CQE_STATUS_EXTD_MASK 0xFFFF
@@ -1791,7 +1793,7 @@ struct be_nic_res_desc {
u8 acpi_params;
u8 wol_param;
u16 rsvd7;
u32 rsvd8[3];
u32 rsvd8[7];
} __packed;
struct be_cmd_req_get_func_config {

drivers/net/ethernet/emulex/benet/be_main.c | 76

@@ -855,11 +855,11 @@ static struct sk_buff *be_xmit_workarounds(struct be_adapter *adapter,
unsigned int eth_hdr_len;
struct iphdr *ip;
/* Lancer ASIC has a bug wherein packets that are 32 bytes or less
/* Lancer, SH-R ASICs have a bug wherein Packets that are 32 bytes or less
* may cause a transmit stall on that port. So the work-around is to
* pad such packets to a 36-byte length.
* pad short packets (<= 32 bytes) to a 36-byte length.
*/
if (unlikely(lancer_chip(adapter) && skb->len <= 32)) {
if (unlikely(!BEx_chip(adapter) && skb->len <= 32)) {
if (skb_padto(skb, 36))
goto tx_drop;
skb->len = 36;
@@ -1013,18 +1013,40 @@ static int be_vid_config(struct be_adapter *adapter)
status = be_cmd_vlan_config(adapter, adapter->if_handle,
vids, num, 1, 0);
/* Set to VLAN promisc mode as setting VLAN filter failed */
if (status) {
dev_info(&adapter->pdev->dev, "Exhausted VLAN HW filters.\n");
dev_info(&adapter->pdev->dev, "Disabling HW VLAN filtering.\n");
goto set_vlan_promisc;
/* Set to VLAN promisc mode as setting VLAN filter failed */
if (status == MCC_ADDL_STS_INSUFFICIENT_RESOURCES)
goto set_vlan_promisc;
dev_err(&adapter->pdev->dev,
"Setting HW VLAN filtering failed.\n");
} else {
if (adapter->flags & BE_FLAGS_VLAN_PROMISC) {
/* hw VLAN filtering re-enabled. */
status = be_cmd_rx_filter(adapter,
BE_FLAGS_VLAN_PROMISC, OFF);
if (!status) {
dev_info(&adapter->pdev->dev,
"Disabling VLAN Promiscuous mode.\n");
adapter->flags &= ~BE_FLAGS_VLAN_PROMISC;
dev_info(&adapter->pdev->dev,
"Re-Enabling HW VLAN filtering\n");
}
}
}
return status;
set_vlan_promisc:
status = be_cmd_vlan_config(adapter, adapter->if_handle,
NULL, 0, 1, 1);
dev_warn(&adapter->pdev->dev, "Exhausted VLAN HW filters.\n");
status = be_cmd_rx_filter(adapter, BE_FLAGS_VLAN_PROMISC, ON);
if (!status) {
dev_info(&adapter->pdev->dev, "Enable VLAN Promiscuous mode\n");
dev_info(&adapter->pdev->dev, "Disabling HW VLAN filtering\n");
adapter->flags |= BE_FLAGS_VLAN_PROMISC;
} else
dev_err(&adapter->pdev->dev,
"Failed to enable VLAN Promiscuous mode.\n");
return status;
}
@@ -1033,10 +1055,6 @@ static int be_vlan_add_vid(struct net_device *netdev, __be16 proto, u16 vid)
struct be_adapter *adapter = netdev_priv(netdev);
int status = 0;
if (!lancer_chip(adapter) && !be_physfn(adapter)) {
status = -EINVAL;
goto ret;
}
/* Packets with VID 0 are always received by Lancer by default */
if (lancer_chip(adapter) && vid == 0)
@@ -1059,11 +1077,6 @@ static int be_vlan_rem_vid(struct net_device *netdev, __be16 proto, u16 vid)
struct be_adapter *adapter = netdev_priv(netdev);
int status = 0;
if (!lancer_chip(adapter) && !be_physfn(adapter)) {
status = -EINVAL;
goto ret;
}
/* Packets with VID 0 are always received by Lancer by default */
if (lancer_chip(adapter) && vid == 0)
goto ret;
@@ -1188,8 +1201,8 @@ static int be_get_vf_config(struct net_device *netdev, int vf,
vi->vf = vf;
vi->tx_rate = vf_cfg->tx_rate;
vi->vlan = vf_cfg->vlan_tag;
vi->qos = 0;
vi->vlan = vf_cfg->vlan_tag & VLAN_VID_MASK;
vi->qos = vf_cfg->vlan_tag >> VLAN_PRIO_SHIFT;
memcpy(&vi->mac, vf_cfg->mac_addr, ETH_ALEN);
return 0;
@@ -1199,28 +1212,29 @@ static int be_set_vf_vlan(struct net_device *netdev,
int vf, u16 vlan, u8 qos)
{
struct be_adapter *adapter = netdev_priv(netdev);
struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf];
int status = 0;
if (!sriov_enabled(adapter))
return -EPERM;
if (vf >= adapter->num_vfs || vlan > 4095)
if (vf >= adapter->num_vfs || vlan > 4095 || qos > 7)
return -EINVAL;
if (vlan) {
if (adapter->vf_cfg[vf].vlan_tag != vlan) {
if (vlan || qos) {
vlan |= qos << VLAN_PRIO_SHIFT;
if (vf_cfg->vlan_tag != vlan) {
/* If this is new value, program it. Else skip. */
adapter->vf_cfg[vf].vlan_tag = vlan;
status = be_cmd_set_hsw_config(adapter, vlan,
vf + 1, adapter->vf_cfg[vf].if_handle, 0);
vf_cfg->vlan_tag = vlan;
status = be_cmd_set_hsw_config(adapter, vlan, vf + 1,
vf_cfg->if_handle, 0);
}
} else {
/* Reset Transparent Vlan Tagging. */
adapter->vf_cfg[vf].vlan_tag = 0;
vlan = adapter->vf_cfg[vf].def_vid;
vf_cfg->vlan_tag = 0;
vlan = vf_cfg->def_vid;
status = be_cmd_set_hsw_config(adapter, vlan, vf + 1,
adapter->vf_cfg[vf].if_handle, 0);
vf_cfg->if_handle, 0);
}
@@ -2963,6 +2977,8 @@ static void BEx_get_resources(struct be_adapter *adapter,
if (adapter->function_mode & FLEX10_MODE)
res->max_vlans = BE_NUM_VLANS_SUPPORTED/8;
else if (adapter->function_mode & UMC_ENABLED)
res->max_vlans = BE_UMC_NUM_VLANS_SUPPORTED;
else
res->max_vlans = BE_NUM_VLANS_SUPPORTED;
res->max_mcast_mac = BE_MAX_MC;

drivers/net/ethernet/freescale/gianfar_ptp.c | 4

@@ -452,7 +452,9 @@ static int gianfar_ptp_probe(struct platform_device *dev)
err = -ENODEV;
etsects->caps = ptp_gianfar_caps;
etsects->cksel = DEFAULT_CKSEL;
if (get_of_u32(node, "fsl,cksel", &etsects->cksel))
etsects->cksel = DEFAULT_CKSEL;
if (get_of_u32(node, "fsl,tclk-period", &etsects->tclk_period) ||
get_of_u32(node, "fsl,tmr-prsc", &etsects->tmr_prsc) ||

drivers/net/ethernet/intel/i40e/i40e_adminq.c | 7

@@ -701,8 +701,7 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
details = I40E_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
if (cmd_details) {
memcpy(details, cmd_details,
sizeof(struct i40e_asq_cmd_details));
*details = *cmd_details;
/* If the cmd_details are defined copy the cookie. The
* cpu_to_le32 is not needed here because the data is ignored
@@ -760,7 +759,7 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
desc_on_ring = I40E_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
/* if the desc is available copy the temp desc to the right place */
memcpy(desc_on_ring, desc, sizeof(struct i40e_aq_desc));
*desc_on_ring = *desc;
/* if buff is not NULL assume indirect command */
if (buff != NULL) {
@@ -807,7 +806,7 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
/* if ready, copy the desc back to temp */
if (i40e_asq_done(hw)) {
memcpy(desc, desc_on_ring, sizeof(struct i40e_aq_desc));
*desc = *desc_on_ring;
if (buff != NULL)
memcpy(buff, dma_buff->va, buff_size);
retval = le16_to_cpu(desc->retval);

drivers/net/ethernet/intel/i40e/i40e_common.c | 2

@@ -507,7 +507,7 @@ i40e_status i40e_aq_get_link_info(struct i40e_hw *hw,
/* save link status information */
if (link)
memcpy(link, hw_link_info, sizeof(struct i40e_link_status));
*link = *hw_link_info;
/* flag cleared so helper functions don't call AQ again */
hw->phy.get_link_info = false;

drivers/net/ethernet/intel/i40e/i40e_main.c | 162

@@ -101,10 +101,10 @@ int i40e_allocate_dma_mem_d(struct i40e_hw *hw, struct i40e_dma_mem *mem,
mem->size = ALIGN(size, alignment);
mem->va = dma_zalloc_coherent(&pf->pdev->dev, mem->size,
&mem->pa, GFP_KERNEL);
if (mem->va)
return 0;
if (!mem->va)
return -ENOMEM;
return -ENOMEM;
return 0;
}
/**
@@ -136,10 +136,10 @@ int i40e_allocate_virt_mem_d(struct i40e_hw *hw, struct i40e_virt_mem *mem,
mem->size = size;