I can't say for sure whether traffic is being dropped. With LACP you'll typically only get the throughput of one NIC per TCP stream, and depending on the hashing algorithm in use, you'll probably only ever see one NIC's speed with your setup.
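To illustrate, here's a toy Python model of per-flow hashing. The real hash is vendor-specific (MAC, IP, and/or port based), but they're all deterministic per flow, so a single iSCSI session to one target portal always rides the same physical link:

```python
# Toy model of per-flow LACP hashing. Real switches use vendor-specific
# hashes; the point is only that the hash is deterministic per flow.
import hashlib

NUM_LINKS = 2  # two bonded NICs

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int.from_bytes(hashlib.md5(flow).digest()[:4], "big") % NUM_LINKS

# One iSCSI session between a host and one target portal is one flow, so
# every packet hashes to the same link and can never exceed one NIC's speed.
# (All IPs/ports here are made up for the example.)
print(pick_link("10.10.10.12", "10.10.10.20", 51000, 3260))
print(pick_link("10.10.10.12", "10.10.10.20", 51000, 3260))  # same link every time
```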
I have my environment set up with MPIO and no LACP, following Dell's networking best practices (using two Force10 switches that are VLT'd). Having LACP or bonded links adds extra config overhead in my view. I'd suggest removing the aggregation, giving NIC 1 and NIC 2 each its own IP, and configuring software iSCSI per page 5 like you pointed out.
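If you end up scripting that port-binding step, here's a rough pyVmomi sketch. The host name, credentials, and the vmhba33/vmk names are placeholders for whatever your environment actually uses, not values from this thread:

```python
# Rough sketch: bind two VMkernel ports (each with its own IP and a single
# active uplink) to the software iSCSI adapter for MPIO.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                             vmSearch=False)
    iscsi_mgr = host.configManager.iscsiManager
    for vmk in ("vmk1", "vmk2"):
        # Roughly equivalent to: esxcli iscsi networkportal add -A vmhba33 -n vmkX
        iscsi_mgr.BindVnic(iScsiHbaName="vmhba33", vnicDevice=vmk)
finally:
    Disconnect(si)
```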
Links are bonded with HP's Trunk protocol. VMK1 and VMK2 have their own IPs and are attached to both NICs, each with a different primary vmnic and the other as failover; they are not teamed on the VMware side. I'm not an expert on how VMware handles traffic at the NIC level, so I was hoping for some confirmation of my suspicion that nothing is gained by aggregating the links at the switch level to the hosts, and that it may in fact reduce performance.
If both of the NICs are bound to the software iSCSI initiator, that means you are running a flat subnet across both of the vmk's. Your interconnect / ISL between your two stacked switches will be a source of contention. Why? The switches have a finite packet buffer, typically on the order of 9 MB shared between all ports depending on the switch, and if you are using LACP as the interconnect between your stacked switches, those ports will consume some of that total buffer. You want to eliminate traffic across your ISL; when using a single subnet, ideally use the bisect / even-odd option (Administration > Networking > Subnet) to prevent traffic from crossing the ISL.
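As a rough Python illustration of what that even/odd option buys you, assuming array ports with even last octets are cabled to one switch and odd ones to the other (which is the layout the option implies; all IPs below are made up):

```python
# Toy sketch of "even/odd" pinning: pairing each initiator with same-parity
# target portals keeps iSCSI sessions on one switch and off the ISL.
from ipaddress import IPv4Address

def parity(ip: str) -> int:
    """0 for an even last octet (switch A), 1 for odd (switch B)."""
    return int(IPv4Address(ip)) & 1

def pick_portals(initiator_ip: str, target_portals: list[str]) -> list[str]:
    # Keep only the portals reachable without crossing the inter-switch link.
    return [t for t in target_portals if parity(t) == parity(initiator_ip)]

# Hypothetical flat 10.10.10.0/24 iSCSI subnet
portals = ["10.10.10.20", "10.10.10.21", "10.10.10.22", "10.10.10.23"]
print(pick_portals("10.10.10.12", portals))  # vmk1, even -> switch A portals
print(pick_portals("10.10.10.13", portals))  # vmk2, odd  -> switch B portals
```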