Update: There wasn't any data feed problem in the end. The different tick charts have to do with the migration to the new MDP 3.0 data protocol, which bundles trades and therefore shows a different tick count and behaviour than before. The issue was that this change was expected to happen around September, but CQG already migrated over the weekend without any warning to their users.
Update 2: Eventually everyone will have to migrate to MDP 3.0. If you want your new volume chart to look like your old tick chart, just use a volume chart 3x the size of the tick chart. Link to study
CQG has been showing problems with tick data since yesterday, missing a lot of ticks.
Both charts are 2000 tick. The one on the left has CQG as data feed and the one on the right has Rithmic.
I read somewhere that this might be the issue:
Does anyone have any information about this migration to MDP 3.0 and if this might be the cause of the problem?
The following 11 users say Thank You to Malthus for this post:
Thanks for your question. There is a parallel conversation on this topic taking place on our forums at DTN IQFeed, DTN.IQ, DTN ProphetX Support Forums. In summary, we continue working with the exchange to try to find solutions to the fact that their new protocol bundles some trades. We made the CME Group aware of the issue months ago. At this time we don’t know if changes will be made to accommodate our/your needs, but we continue to fight for our customers’ needs nonetheless. We will be holding off on converting to MDP 3.0 until it’s absolutely necessary, which means our customers are receiving unbundled trades today, no differently than they did yesterday.
Feel free to contact me with any more questions.
Thanks
James
If you have any questions about IQFeed please send me a Private Message or use the BMT "Ask Me Anything" thread.
The following 6 users say Thank You to IQFeed James for this post:
That's gonna hose up a lot of stuff in a hurry ... lol
But the good news is, maybe we get back our beloved aggregated time and sales they (CME) took away from us back in 2009.
@Malthus ... maybe rather than looking at two charts, study the Time and Sales of each stream and see if the old stream is thousands of one-lot orders and the new stream is the actual initiating trades, meaning a slower stream of 10, 20, 35, 100, 300, ... lot orders.
Be Patient and Trade Smart
The following 2 users say Thank You to trendwaves for this post:
Platform: Sierra Chart, TOS, Tradestation, NinjaTrader
Trading: energy
Posts: 114 since Jul 2012
Thanks: 81 given, 171 received
FYI, I am using Sierra with CTS as my data provider. They announced a rolling migration; CME agricultural futures and options and CBOT financial futures and options have already been migrated. I regrettably don't have historical charts to show the difference. Since I primarily trade off tick charts, and actually use the volume of those tick charts as part of my primary trading system, I am extremely nervous and wondering what is coming this way.
Tick charts will certainly be affected by that migration. If I understand it correctly, with the new protocol you get aggregated volume data at each price instead of every tick as with the legacy FIX/FAST feed. Tapes of any kind will be affected as well.
I think the key is in HOW they bundle the ticks. There appears to be scant information on that, but borrowing from the IQFeed site:
OK - so bundle by same time and price (and side, hopefully) - the question that then comes to mind is: how granular is that timestamp?
2 trades cannot occur at the same time - not if you go down to a low enough time precision. Even when a large order for, say, 1000 contracts comes in, it still needs to be filled against multiple limits and there will still be a tiny, tiny gap between them.
So maybe it's bundling trades by second (bad), millisecond (not great) or nanosecond (probably won't be a lot of bundling).
More importantly - one would presume if a trade hits the offer at 2000 and price ticks up and the next trade hits the bid at 2000 that they would not be bundled....
I have posed both questions to DTN.
I think unbundling as proposed by DTN is going to be an uphill struggle for them.
If you have any questions about the products or services provided, please send me a Private Message or use the futures.io "Ask Me Anything" thread
We were not aware that MDP 3.0, which uses Simple Binary Encoding, bundles trades.
We have deployed it on one of our servers for the CME equities and Forex futures. With this news, we will revert and stay away from this as long as we can.
We are going to confirm, though, what we are hearing in this thread.
The following 10 users say Thank You to SierraChart for this post:
I'm being told CQG switched the CME data to MDP 3.0 this weekend and it's why users of CQG saw downtime this weekend. I was told this directly from CQG via a direct message on Twitter. I'm heavily concerned as to why there was no email from NT or any of the other platforms that rely heavily on CQG.
The bundling of trades in MDP 3.0 can heavily affect a trader's strategies and turn a winning strategy into a losing one if you're using tick charts or time and sales data!
The following user says Thank You to Branzol for this post:
This may depend upon the charting software and how the chart is built in that software. If the charting software starts drawing a new bar when the sum of volume for the current bar is greater than the bar interval, then this change will affect volume bars too.
For instance, if the charting software receives 100 trades each with size=1 at the same price, then a 10 volume chart will draw 10 bars with the price being the OHLC for all 10 bars. With this change, if data is summarized and instead of 100 trades, it receives 5 trades each with size=20, then the platform may draw just 5 bars.
Site Administrator | Swing Trader | Data Scientist & DevOps
Manta, Ecuador
Experience: Advanced
Platform: Custom solution
Trading: Futures & Crypto
Posts: 50,008 since Jun 2009
Thanks: 32,469 given, 98,296 received
Then the platform would be at fault. A volume bar is equal to the volume you specify, no less, no more. Exceptions would be session end/start where your platform should allow you to start a new bar.
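To make that concrete, here is a minimal sketch (Python, not any platform's actual code) of a volume-bar builder that splits an oversized trade across bars, so each bar holds exactly the specified volume. Under this logic, the earlier example of 100 one-lot prints versus 5 twenty-lot prints produces identical 10-volume bars; only a builder that closes a bar whenever the running sum exceeds the interval would show a difference. The function name and sample trades are made up for illustration.
CODE
# A volume-bar builder that honors the exact bar size by splitting large trades.
def build_volume_bars(trades, bar_size):
    """trades: list of (price, size); returns bars as (open, high, low, close, volume)."""
    bars, cur, vol = [], None, 0
    for price, size in trades:
        remaining = size
        while remaining > 0:
            if cur is None:                      # start a new bar
                cur, vol = [price, price, price, price], 0
            take = min(remaining, bar_size - vol)
            cur[1] = max(cur[1], price)          # high
            cur[2] = min(cur[2], price)          # low
            cur[3] = price                       # close
            vol += take
            remaining -= take
            if vol == bar_size:                  # bar is full -> close it
                bars.append((*cur, vol))
                cur = None
    if cur is not None:
        bars.append((*cur, vol))                 # partial last bar
    return bars

unbundled = [(2000.25, 1)] * 100                 # 100 one-lot prints
bundled   = [(2000.25, 20)] * 5                  # same flow reported as 5 trades
assert build_volume_bars(unbundled, 10) == build_volume_bars(bundled, 10)   # 10 identical bars
CODE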
I hope and think that even if they bundle/aggregate the consecutive trades at the same price, they will keep bid/ask trades separated; in that case the delta calculation will not be affected.
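A tiny sketch of that point, assuming each (possibly aggregated) trade still carries its aggressor side: delta only needs volume per side, so merging consecutive same-price, same-side prints leaves it unchanged. The numbers are hypothetical.
CODE
def delta(trades):
    """trades: list of (price, size, side) where side is 'B' (at ask) or 'S' (at bid)."""
    return sum(size if side == 'B' else -size for _, size, side in trades)

unbundled = [(2000.25, 1, 'B')] * 30 + [(2000.00, 1, 'S')] * 10
bundled   = [(2000.25, 30, 'B'), (2000.00, 10, 'S')]   # same flow, aggregated per price/side
assert delta(unbundled) == delta(bundled) == 20
CODE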
Take your Pips, go out and Live.
Luke.
The following user says Thank You to LukeGeniol for this post:
Strange response from a DataFeed vendor. You mean you did not read the protocol before you started using it to supply data to your clients? My first read of just the summary made it quite clear that raw data was gone, in favor of what is essentially snapshot data on a very fast time frame. And as a mere end consumer of the data, I was not even reading the announcement with any serious programming intent.
This post really scares me. A data vendor who is unaware of the format of the data that they supply? Fie!
The following 2 users say Thank You to koganam for this post:
At this point, with the data no longer based on ticks but on time, down to the microsecond, I'm curious to see how many trades will be made in the same microsecond....
Site Administrator | Swing Trader | Data Scientist & DevOps
Manta, Ecuador
Experience: Advanced
Platform: Custom solution
Trading: Futures & Crypto
Posts: 50,008 since Jun 2009
Thanks: 32,469 given, 98,296 received
Not very many, for sure. Guessing it would be an extremely tiny percentage.
My understanding is this: The only issue here is bundling a trade. When I submit a market order for a 50 lot on ES, I get filled with lots of 1's and 2's and some 5's usually. Prior to MDP 3.0, each fill would count as a trade. Now my one order for 50 gets counted as one trade, regardless of the incremental fills, within an allowable limit of time.
I've not seen specific documentation on the elapsed time that is allowable.
It appears NinjaTrader backfill data is using the new data while real-time Rithmic data is still in the old format. Makes for confusing and less than useful tick charts.
The following 3 users say Thank You to Seahn for this post:
Tick trading has grown in popularity. I know this one guy used a bunch of them, even a 70 tick chart on the CL. I tried it. Wow, was that ever a data stream. It showed some nice basing action for entry. Boy, did it clog up my Ninja database though, besides making me a little dizzy. Even midday I would have to clean the NT database tick files and cache. Lucky enough for me, someone showed me how to clean that stuff up. Ninja might encourage CME to lighten the load on their platform, with a wink wink?
My CL 2400 tick chart is not as affected as the 2000 tick ES. This is one of those days I'm glad to have time based charts, volume ladder and volume charts, also. "You only need one chart to succeed in trading." Another axiom bites the dust.
Site Administrator | Swing Trader | Data Scientist & DevOps
Manta, Ecuador
Experience: Advanced
Platform: Custom solution
Trading: Futures & Crypto
Posts: 50,008 since Jun 2009
Thanks: 32,469 given, 98,296 received
I did a test on millisecond data on ES from Friday.
Here is a snippet. You can make assumptions on how these would be grouped together under the new protocol. Keeping in mind my data is millisecond resolution, not microsecond.
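One rough way to make that guess is to assume that prints sharing the same timestamp, price and aggressor side would collapse into a single entry. This is only a proxy for the new protocol (the real grouping is per aggressor order in the Trade Summary event, which a time and sales file cannot fully recover), and the sample prints below are made up.
CODE
from itertools import groupby

def estimate_grouped_ticks(prints):
    """prints: list of (timestamp_ms, price, size, side) in feed order."""
    key = lambda p: (p[0], p[1], p[3])              # group on (timestamp, price, side)
    grouped = [(k[0], k[1], sum(p[2] for p in g), k[2])
               for k, g in groupby(prints, key=key)]
    return len(prints), len(grouped)                # old tick count vs estimated new count

sample = [
    (1430512200123, 2110.25, 1, 'B'),
    (1430512200123, 2110.25, 2, 'B'),   # same ms/price/side -> merges with the print above
    (1430512200123, 2110.50, 5, 'B'),   # new price -> stays separate
    (1430512200124, 2110.50, 1, 'S'),
]
print(estimate_grouped_ticks(sample))   # (4, 3)
CODE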
Platform: Sierra Chart, TOS, Tradestation, NinjaTrader
Trading: energy
Posts: 114 since Jul 2012
Thanks: 81 given,
171
received
I did a 120-day analysis and came up with an average of 5500 volume being equivalent to 2000 ticks on ES. I know a lot of tick chart folks use 2k. You can see in the pic it's not much different, and in many cases it has cleaner price action signals. I don't want to get into a tick chart discussion in this thread, just wanted to show that the alternatives are pretty clean.
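For anyone wanting to repeat that kind of analysis, here is a small sketch: average the volume that accumulates over every N trades in your own historical data to pick a roughly equivalent volume-bar size. The history list below is invented; on the real numbers above (about 5500 contracts per 2000-tick ES bar), this lands near the "volume chart roughly 3x the tick count" rule of thumb from the update at the top of the thread.
CODE
def equivalent_volume_interval(trade_sizes, ticks_per_bar):
    """Average contracts traded per `ticks_per_bar` consecutive trades."""
    full_bars = len(trade_sizes) // ticks_per_bar
    if full_bars == 0:
        raise ValueError("not enough trades for a single bar")
    counted = trade_sizes[:full_bars * ticks_per_bar]
    return sum(counted) / full_bars

history = [1, 2, 1, 5, 3, 1, 2, 10, 1, 4] * 1000    # hypothetical ES trade sizes
print(equivalent_volume_interval(history, 2000))     # 6000.0 contracts per 2000-tick bar
CODE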
The following 5 users say Thank You to hobart for this post:
I compared my realtime journal chart of the YM on a 5 BetterRenko from 4/30/15 with the current historical chart in NT (Continuum data feed from NTB). They look exactly the same, so BetterRenko folks shouldn't be affected by the data change - at least not after the bar forms (unless NT is using old historical data...)
Now don't get confused between tick charts and ticks in price.
Renko and also range charts need a certain number of "ticks" (as in price increments) before creating a new bar. It doesn't matter if a trade is reported as 1 trade with 500 contracts or 500 trades with 1 contract. In other words, the meaning of a tick here is a price increment, like a 1/4 point in ES.
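A quick sketch of that point for range bars (a simplified builder, not any platform's exact algorithm, and the price path is invented): bar formation depends only on how far price travels in price ticks, so whether the move is delivered as one print per price level or 500 one-lot prints per level, the bars come out the same. Renko behaves analogously.
CODE
TICK = 0.25   # ES tick size

def build_range_bars(prices, range_ticks):
    """Close a bar whenever the high-low range reaches `range_ticks` price increments."""
    bars, bar_open, hi, lo = [], None, None, None
    for p in prices:
        if bar_open is None:
            bar_open = hi = lo = p
            continue
        hi, lo = max(hi, p), min(lo, p)
        if (hi - lo) / TICK >= range_ticks:
            bars.append((bar_open, hi, lo, p))
            bar_open = hi = lo = p
    return bars

path = [2100.00, 2100.25, 2100.50, 2100.75, 2101.00, 2100.75]
one_print_per_level  = path
many_prints_per_level = [p for p in path for _ in range(500)]   # same path, 500 prints each
assert build_range_bars(one_print_per_level, 4) == build_range_bars(many_prints_per_level, 4)
CODE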
The following 6 users say Thank You to Silvester17 for this post:
If it's showing the size of the aggressor (I think that's the correct term), and not the total of everything at that time (however small that timespan might be), I think this would be superior. What I understood from what I read leaves room for both interpretations. The example was an aggressor. The text didn't limit it to that...
Yeah, honestly I'm skeptical that the CME would do anything to benefit small-timeframe traders whatsoever. It'll be presented as having a bunch of great advantages, but behind the scenes there may be actual negative ramifications.
If it truly does basically go back to a pre-2009 type treatment of data, then good. But it may end up just snapshotting more data in the name of "efficiency."
Interesting thread, but there seems to be a lot of misunderstandings.
The new protocol sends thru trades using an event based model. So if an aggressor order triggers 5 other orders (e.g. buy 5 @ market triggers 5 sell orders with qty = 1), a single trade entry is sent thru. To me, this is NOT bundling, but instead is correct behavior in that a single aggressor trade filled (or partially filled). The messages for the event also contain the number of orders and the qty that was filled against each order, so that trade could be surfaced as 5 single contract trades, but while that matches today's feed behavior, it seems like that is not the way to go IMO. Would you rather know that someone bought 100, or see 100 one-car entries fly by?
Say you have a single aggressor order that triggers a bunch of orders at different prices. In this case, multiple trade entries are sent thru, one for each price. These trade entries are bundled into the same event and have the same timestamp, but should be represented as multiple trades. Again, the order fill qty is available, so it could be broken down into smaller pieces based on the matched order vs the aggressor order. Again, this is not really bundling.
There are some other fringe cases for spreads, implied trades, and misc events, but I think the above are the major ones that matter.
Another type of bundling is the bundling of multiple messages into a packet that is sent over the network. I have not seen any doc on what kind of time window is used to do this bundling, but it is required to do this in order to have an efficient transfer of data. I suspect the window is rather small so as not to affect latency.
The event based model sends thru trades marked to the nano-second, but really there is not much use for this granularity for the mere mortal, especially when your latency is measured in ms. Also, most charting platforms won't be able to record the ns anyway, so will likely be truncated back to something less granular.
Also, the aggressor is properly marked, so things like delta should be fine.
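To illustrate the choice described above, here is a sketch using a made-up event structure (not the actual MDP 3.0 / SBE message layout): a single aggressor buy of 5 that matches five resting one-lot sells can be surfaced either as one print or as five prints. A tick chart counts prints, so the two choices give different tick counts for identical traded volume.
CODE
event = {
    "aggressor_side": "BUY",
    "trades": [                                   # one entry per price level traded
        {"price": 2100.25, "qty": 5,
         "order_fills": [1, 1, 1, 1, 1]},         # qty matched against each resting order
    ],
}

def aggressor_prints(ev):
    """One print per traded price level (aggressor view)."""
    return [(t["price"], t["qty"], ev["aggressor_side"]) for t in ev["trades"]]

def split_prints(ev):
    """One print per filled resting order (legacy-style view)."""
    return [(t["price"], fill, ev["aggressor_side"])
            for t in ev["trades"] for fill in t["order_fills"]]

print(len(aggressor_prints(event)), len(split_prints(event)))    # 1 vs 5 ticks
print(sum(q for _, q, _ in aggressor_prints(event)),
      sum(q for _, q, _ in split_prints(event)))                 # 5 contracts either way
CODE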
The following 16 users say Thank You to aslan for this post:
Our programmer is well aware of the format; it had to be very thoroughly understood, and we went into it in a lot more depth than just reading the summary. The end result of the new CME protocol as we are currently using it was not fully understood by management. We are still only in the testing phases of using this new protocol.
We understood the data format is different, but that does not mean that we cannot break out the individual trades. There was a misunderstanding by management of the details of the protocol and the fact that our server is transmitting a smaller number of trades because we are not breaking out the individual trades from the messages. This is what we meant when we said we were not aware of this. We have done further research; we can break out the individual trades and we will do that.
In summary, the raw data is not gone. The new CME Simple Binary Encoding data feed does provide it and it is accessible.
The following 17 users say Thank You to SierraChart for this post:
My 2400 tick CL does look cleaner, also.
Mack from PATS says he is changing his 2000 tick ES chart to a lower level, like 500 to 1000 ticks. Not sure what he settled on.
There is a lot of misunderstanding about MDP3 here. Please read the documentation on CME's website and you will see that it's not the bundling that is the issue. The issue is that most data providers are not willing or able to deliver the unbundled trades to you, although CME provides that information in the raw feed.
People are worried about nothing. The only downside is a tick chart may need to be slightly adjusted in terms of number of ticks to account for a smaller average number of ticks.
The following 5 users say Thank You to aslan for this post:
Seeing how different vendors are already looking at different ways to break apart (or not) these data messages for subsequent delivery to the end users concerns me. CME is taking the position: here is the new protocol, good luck.
There are timing issues involved in how to precisely and accurately break apart and correctly synchronize (insert) the components of a packet into the resulting output stream. This is because the stream is not just trade executions, but is happening in parallel with book update events. It is the correct and accurate ordering of these asynchronous event streams that has in the past caused significant problems for vendors.
In the current protocol, there is a well known race condition between asynchronous trade execution events and the separate stream of order book update events. This race condition can produce a mis-ordering of events in the resulting stream produced by a flawed vendor algorithm, inadvertently causing buy orders to appear as sell orders and, conversely, sell orders to appear as buy orders in the stream. With each vendor's home-brewed algorithm, we ended up with vendors implementing a variety of solutions, resulting in a good-to-bad ranking of retail data streams with respect to what we call an 'unfiltered' data feed. A significant variance in the accuracy of unfiltered data feeds currently exists from vendor to vendor.
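For readers who haven't seen that failure mode, here is a sketch of the classic client-side fallback: when a feed does not carry (or a vendor drops) the aggressor flag, the side is inferred by comparing the trade price to the latest known bid/ask. If the book update that moved the quote is processed before the trade that caused it, the same trade gets classified the wrong way. Prices here are hypothetical.
CODE
def infer_side(trade_price, bid, ask):
    """Guess the aggressor from the quote the client believes is current."""
    if trade_price >= ask:
        return "BUY"     # traded at/above the offer -> buyer was the aggressor
    if trade_price <= bid:
        return "SELL"    # traded at/below the bid  -> seller was the aggressor
    return "UNKNOWN"

# A buyer lifts the 2100.50 offer, and the book then ticks up to 2100.50 x 2100.75.
print(infer_side(2100.50, bid=2100.25, ask=2100.50))  # 'BUY'  -> correct: trade processed before the book update
print(infer_side(2100.50, bid=2100.50, ask=2100.75))  # 'SELL' -> wrong: the post-trade book update arrived first
CODE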
This new protocol adds at least one new layer of complexity to this, and perhaps multiple layers of complexity. Based on past performance, I cannot help but wonder if any vendor will 'get it right' this time?
Having said all that, I personally have always favored viewing the market from the perspective of the initiating trade (the 'aggressor'). I find much more value in seeing the initiating trade (Mike's 50 lot trade) on the tape. I really don't care about seeing the unending stream of Joe 1-lot fills. I view this as a signal-to-noise problem. My concern is some (hopefully not all) vendors may attempt to deconstruct the initiating trade execution packet back into a useless stream of resulting 1-lot fills. My hope is the bandwidth savings (and lower implementation complexity) found in the initiating trade stream, as designed by the CME, will persuade vendors to do the right thing and keep it intact for their customers. Sometimes less is more.
Be Patient and Trade Smart
The following 4 users say Thank You to trendwaves for this post:
This is dead on. The info is there, but there are some fringe cases that are not well documented. The entire MDP3 protocol is kind of like that though, very general purpose (actually too general purpose) and not trivial to implement. It is definitely optimized to send thru trades based on the aggressor order.
@SierraChart - I would prefer to have the transfer done as a single trade rather than individual trades which are split. At least please have an option to pick this method.
This really is not the case, as each trade in the feed is marked as a buy or sell aggressor (true for old and new protocol). There are some trades that come thru not marked (i.e. implied trades), and these trades can cause some minor differences between vendors. Now if the vendor does not send that flag thru, then what you say is totally true as they have to be marked based on the current book at the client end which is a crap shoot.
In the old protocol, it was very painful because all updates were interspersed, so you could see a packet with trades and book updates for multiple symbols in any order. So, you are correct there were some serious race conditions to watch for, but it was manageable. In the new protocol, that is mitigated with the event model, so things are not interspersed.
Another major factor is how the vendor gets and processes the data. Each vendor has to decide how to interface to the MDP stream. If you are using an off the shelf component, you may not have access to the info to do the splitting, because you are getting a massaged version of the data. If instead the vendor writes to the lowest level, then they have the info.
I think in the short term you will see differences between vendors, until the CME switches over to MDP only and people decide on showing aggressor trades vs. split trades.
The following 8 users say Thank You to aslan for this post:
By the way, there's a mistake in the quoted post: the variation in message size doesn't come from a textual representation (I forgot we were comparing FAST and SBE), but rather from the fact that in FAST, field values are encoded in binary to take up as few bytes as possible, rather than in a constant-size binary representation.
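A toy illustration of that size difference (not a real FAST or SBE codec, and the field value is arbitrary): FAST packs an unsigned integer into 7-bit groups with a stop bit on the last byte, so small values use fewer bytes, while SBE writes every field at a fixed width so offsets are constant and decoding is cheap.
CODE
import struct

def fast_stop_bit_encode(value: int) -> bytes:
    """Encode a non-negative int as FAST-style stop-bit bytes (7 data bits per byte)."""
    groups = []
    while True:
        groups.append(value & 0x7F)
        value >>= 7
        if value == 0:
            break
    groups.reverse()
    groups[-1] |= 0x80                      # stop bit marks the final byte
    return bytes(groups)

def sbe_fixed_encode(value: int) -> bytes:
    """Encode the same int as a fixed-width 4-byte little-endian field."""
    return struct.pack("<I", value)

qty = 150
print(fast_stop_bit_encode(qty).hex())      # '0196'     -> 2 bytes, size depends on the value
print(sbe_fixed_encode(qty).hex())          # '96000000' -> always 4 bytes
CODE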
Tradestation Data Integrity confirmed that they will change to the new CME data protocol this August (2015), before the CME change in September, so they have time to troubleshoot, etc.
Tradestation will make a formal announcement sometime this Summer to confirm the upcoming data change.
According to TS, their software engineers are aware of the impact that the new data protocol will have on Tick charts.
That said, they did not confirm nor deny if they will modify the TS platform in any way to accommodate the new data stream so as to keep their Tick charts useable.
Regarding the prior comments on Tick vs. Volume charts:
I have found Tick charts to be much more consistent & usable across all volume day types (low volume days, high volume days, etc.).
Volume charts do work very well on most normal & high volume days, but are horrible on lower volume days (at least on the TS platform).
On lower volume days, you mostly have retail traders driving the market & therefore you would need to adjust your share bar setting for the charts to be usable. Tick charts are affected on these days, but much less so, given a tick is (or was) order based (not share based).
CH
The following 3 users say Thank You to CH888 for this post:
Thanks. I wonder how this will affect the Time and Sales within Tradestation. I think T&S is currently grabbing all data from ticks and reconstructing it. It will be interesting to see how exchange data is reported versus how it is seen now. Will there be overlaps... missing prints... out-of-order prints?
I use Tradestation, and while developing some things based on T&S, the developers could construct the full tape from tick data if I did not have T&S running live. However, at the time of development I went with live data, so I did not dabble with T&S reconstructions from tick data.
I recently had some thoughts about further development that would look into T&S, but looking at this thread I thought I should wait, since that would rely on T&S reconstructions from tick data.
I will need to check and talk to the developers about whether the reconstructed tape will still be able to distinguish the details I see today under the proposed data changes.
Quoting aslan: "Interesting thread, but there seems to be a lot of misunderstandings. The new protocol sends thru trades using an event based model ..."
Do you have any links to show that they are bundling by individual aggressor trades?
I couldn't see that when I looked - but then I am an occasional airhead.
If you have any questions about the products or services provided, please send me a Private Message or use the futures.io "Ask Me Anything" thread
Quoting Jigsaw Trading: "Do you have any links to show that they are bundling by individual aggressor trades?"
I think we may be mixing terms in this thread. For me at least the term "bundling" refers to a data provider artificially collecting (or grouping) several trades into a packet and then once a fixed time duration has transpired the packet is transmitted, with the objective being to reduce output bandwidth. Interactive Brokers is a good example of a data provider doing this practice. This is what we up until now have referred to as "filtered" data.
It is my understanding MDP 3.0 is generating a single trade report for each individual aggressor trade that executes. Using the Big Mike 50 lot example, Mike's 50 lot trade would produce a single trade report of 50 lots in size, and that trade report would also provide details on the individual (1, 2, 5, 12, 14 ... lot) trades that filled the 50 lot order.
Now if Interactive Brokers (or some other data providers) chooses to further "bundle" these aggressor trades to achieve additional resource economy, that is completely outside the MDP 3.0 protocol as I understand it at this time.
I think some initial confusion may have arisen when we saw the details of the trade report, and the term bundling started being used to refer to the trade report details (those 1, 2, 5... lot trades) used to fill the initiating aggressor trade.
Be Patient and Trade Smart
The following 3 users say Thank You to trendwaves for this post:
Quoting trendwaves: "I think we may be mixing terms in this thread. For me at least the term "bundling" refers to a data provider artificially collecting (or grouping) several trades into a packet ..."
understood...
"It is my understanding MDP 3.0 is generating a single trade report for each individual aggressor trade that executes" - it's this specifically that I am interested in.
I haven't seen any documentation that states that this is the case. IQFeed seem to think it's collating trades by time.
So I'm wondering which documentation led you to your understanding.
If you have any questions about the products or services provided, please send me a Private Message or use the futures.io "Ask Me Anything" thread
The following 2 users say Thank You to Jigsaw Trading for this post:
Just page down to the first image, and you can see how a msg is laid out with the trades, and the orders. The referenced example is an aggressor order that matched to multiple book levels: buy 40 @ market, and that results in trades of 10, 20, and 10 at three different prices, and matches 5 different target orders. So, this example is a little beyond what has been mentioned in this thread in that even the aggressor order is split up (has to be due to different prices), but you should get the idea if you just think of the same case where the aggressor order was a single entry vs 3.
The CME wiki is a little tricky to navigate, and a lot of info is inferred or missing (it is a wiki vs a spec), but you can get a lot of info there.
BTW, trendwaves post is dead on.
DionysusToast
IQFeed seem to think it's collating trades by time.
Nope.
The following 6 users say Thank You to aslan for this post:
If this protocol is bundling just in the sense that they're encapsulating more data in a single packet without reducing accuracy, then there shouldn't be a problem. If it also allows brokers like IB to potentially pass the data through unmodified, then that would be even better (although historical bid/ask would probably still be a problem).
"It is my understanding MDP 3.0 is generating a single trade report for each individual aggressor trade that executes" - it's this specifically that I am interested in.
I haven't seen any documentation that states that this is the case. IQFeed seem to think it's collating trades by time.
So I'm wondering which documentation lead you to your understanding.
The Market Data Incremental Refresh (35=X) message includes a Trade Summary message which indicates the quantity and optional inclusion of the anonymous, CME Globex-assigned order identifier.
Trade Summary Data Sequence
The Trade Summary data is the first type of message sent on the market data feed for a trade.
A Trade Summary message represents a distinct match comprised of all orders that traded together as the result of a single aggressing order, elected stop order, mass quote, or a market state event.
In my prior comments I used the term "trade report" to represent what MDP 3.0 is calling the "Trade Summary message". As aslan points out, this message can be a single UDP packet or multiple packets in the case where a single aggressor order exceeds the boundary of a single packet ...
Quoting
A single Trade Summary message can be split across multiple packets if the total number of related entries cannot be fit in a single UDP packet.
So in my mind, this tells me each unique aggressor order triggers a Trade Summary message, where the message can be one or more UDP packets as needed. For my needs and desires, I can stop here and simply use the aggressor trade information and drop the details, and in so doing should match up roughly with your "reconstructed tape".
In reading the DTN thread, it sounds like DTN wants to break the Trade Summary message back apart into the details of all the filled limit orders, those 1, 2, 5, 21 ... lot fills used to offset the single initiating aggressor order. I also noticed DTN is using the term "bundled" in reference to the Trade Summary message itself. This goes directly to my initial concern: as I stated previously, I have no use for or desire to see that detail; the value in the tape, for me, is in the aggressor order itself.
I suppose in an ideal world, data providers would pass through all of the information contained in the Trade Summary message, including the true aggressor order, and let platforms like Jigsaw or NinjaTrader develop proprietary solutions for traders to use and manipulate all of the information it contains. One trader might need the fine details; another, like myself, wants the initiating aggressor-side trade information. I don't view one as better than the other, as they are the two parts needed to accurately and fully depict both sides of the same market event.
With that said, I recognize the reality that as each data provider picks a side, the other side loses information. I am just hoping at least one data provider will pick my side... I take comfort in recognizing that the cheapest and easiest solution is to transmit the single initiating aggressor order and drop the details. (The DTN thread certainly confirms that.)
Be Patient and Trade Smart
The following 5 users say Thank You to trendwaves for this post:
I do not know if the new transmission protocol of the CME is better or worse, but IMHO the raw tick, meaning each single trade, is the better data to have, regardless of the time granularity. One can always aggregate them afterwards with the appropriate tools.
1. a 500 lot trade being reported as a 500 lot trade on T&S (pre 2009 and MDP 3.0)
2. a 500 lot trade being reported as 500 1 lot trades on T&S (2009 to Oct 2015)
I know which I'd prefer ;-)
Be aware that if you are trading through NinjaTrader brokerage with Rithmic data, Rithmic is still using the current protocol.
But if you fill your NinjaTrader charts with historical data... it's being filled with CQG/Continuum data, which has moved to MDP 3.0 for both live and historical data.
Therefore, in this setup, if you initiate a historical data download intraday... your data will change and so will your signals.
Ouch!
mpx
The following 7 users say Thank You to mpxtreme for this post:
Will this MDP 3.0 change affect any of the print activity of the DOM or will it only be apparent on the time&sales?
Perhaps Peter from Jigsaw could kindly chime in on this one:
Just to give you a scenario.
Pre the MDP 3.0 change, a continually refreshing iceberg order is hit by a large number of one-lots that are likely to be a larger bundled trade (the machine-gun action, as some label it).
Now after this change will this type of action cease on CME products as the larger trade is now bundled?
How about Eurex? Any news whether they are likely to follow suit?
I wanted to chime in here with what will probably be a newbie question, but it may shed some light for other confused DOM traders.
That said, I may be completely muddled on this situation!
Quoting
"Will this MDP 3.0 change affect any of the print activity of the DOM or will it only be apparent on the time&sales? ... Now after this change will this type of action cease on CME products as the larger trade is now bundled?"
The answer to that question depends on exactly how your data provider chooses to report trades after they switch over to MDP 3.0.
Scenario 1: If your data provider, such as CQG/Continuum, transmits the aggregated trade, then yes, the "large amount of one lots" will no longer show up as "machine gun action", and may very well show up as a single 600 lot trade print hitting the bid or ask.
Scenario 2: On the other hand, if your data provider breaks the Trade Summary message back apart into all of the one lot fills from the order book, as DTN is indicating they are considering doing, then you will see no difference compared to pre-MDP 3.0. Your "machine gun action" will remain intact. But keep in mind, this 'price action' might lead you to think "oh, it's an iceberg being refreshed!", where it is simply just one or two large block "600 lot" market orders being filled by a large quantity of one lot limit orders in the book (as scenario 1 shows).
Looking at the CQG time and sales alongside the DTN time and sales, we see how the iceberg is actually a very small handful of large block market orders hitting into a price level where there are perhaps thousands of one lot orders awaiting a fill. Each 600 lot market order on the CQG side triggers hundreds of one lot 'machine gun' fills on the DTN side.
Be Patient and Trade Smart
The following 4 users say Thank You to trendwaves for this post:
Quoting trendwaves: "The answer to that question depends on exactly how your data provider chooses to report trades after they switch over to MDP 3.0 ..."
Nicely put trendwaves. Makes perfect sense now.
I see how the bundled data will have an adverse effect on DOM traders monitoring order flow. The more the data is whittled down, the easier it is to engage in the process and not get caught short. I guess I'll reiterate Mike's point on how traders essentially don't take well to change. Trading has evolved exponentially since the 90's with the rise of Island/BATS, etc. We simply need to adapt to the change that is always forced upon us or face extinction!
The following user says Thank You to Zefi for this post:
It is worth adding to this discussion that the SEC market structure meeting on May 13th made some headway on the hotly debated topic of an uncompetitive marketplace of HFTs skimming price and abusing the maker-taker process. This is long overdue, with all the manipulative order types that have been present since the inception of Reg NMS in 2007, including Direct Edge's 'hide not slide', NASDAQ's MPPO, etc.
It's worth noting that NYSE's Arca seems to be leading the way, with the pilot rule changes to be implemented on their Pillar platform to test the new market structure.
QUOTE
The proposed priority categories would be:
Proposed Rule 7.36P(e)(1) would specify “Priority 1 – Market Orders,” which provides that unexecuted Market Orders would have priority over all other same-side orders with the same working price. This proposed priority is the same as current Exchange priority rules under which resting Market Orders have priority over other orders at the same price. [Footnote 22: This priority is currently specified in Rule 7.16(f)(viii).] Circumstances when an unexecuted Market Order would be eligible to execute against an incoming contra-side order include when a Market Order has exhausted all interest at the NBBO and is waiting for an NBBO update before executing again, pursuant to Rule 7.31(a), or when a Market Order is held unexecuted because it has reached a trading collar, pursuant to Rule 7.31(a)(3)(A). In such circumstances, the unexecuted Market Order(s) would have priority over all other resting orders at that price.
Proposed Rule 7.36P(e)(2) would specify “Priority 2 – Display Orders.” This proposed priority category would replace the “Display Order Process.” As proposed, non-marketable Limit Orders with a displayed working price would have second priority. For an order that has a display price that differs from the working price of the order, if the working price is not displayed, the order would not be ranked Priority 2 at the working price.
Proposed Rule 7.36P(e)(3) would specify “Priority 3 – Non-Display Orders.” This priority category would be used in Pillar rules, rather than the “Working Order Process.” As proposed, non-marketable Limit Orders for which the working price is not displayed, including the reserve interest of Reserve Orders, would have third priority.
Proposed Rule 7.36P(e)(4) would specify “Priority 4 – Tracking Orders.” This priority category would replace the “Tracking Order Process,” as discussed in further detail below in connection with proposed Rule 7.37P. As proposed, Tracking Orders would have fourth priority.
Proposed Rule 7.36P(f) would set forth that within each priority category, orders would be ranked based on time priority.
Proposed Rule 7.36P(f)(1) would provide that an order is assigned a working time based on its original entry time, which is the time an order is first placed on the NYSE Arca Book. This proposed process of assigning a working time to orders is current functionality and is substantively the same as current references to the “time of original order entry” found in several places in Rule 7.36. To provide transparency in Exchange rules, the Exchange further proposes to include in proposed Rule 7.36P(f) how the working time would be determined for orders that are routed. As proposed:
- Proposed Rule 7.36P(f)(1)(A) would specify that an order that is fully routed to an Away Market [Footnote 23: The Exchange proposes Rule 1.1(ffP), which would define the term “Away Market.” The proposed definition is based on the existing definition of “NOW Recipient,” which is a term that the Exchange would not be using in Pillar. For Pillar, the proposed definition of “Away Market” would reference the term “alternative trading system” instead of ECN.] on arrival would not be assigned a working time unless and until any unexecuted portion of the order returns to the NYSE Arca Book. The Exchange notes that this is the current process for assigning a working time to an order and proposes to include it in Exchange rules to provide transparency regarding what is considered the working time of an order that was fully routed on arrival.
- Proposed Rule 7.36P(f)(1)(B) would specify that for an order that is partially routed to an Away Market on arrival, the portion that is not routed would be assigned a working time. If any unexecuted portion of the order returns to the NYSE Arca Book and joins any remaining resting portion of the original order, the returned portion of the order would be assigned the same working time as the resting portion of the order. If the resting portion of the original order has already executed and any unexecuted portion of the order returns to the NYSE Arca Book, the returned portion of the order would be assigned a new working time. This process for assigning a working time to partially routed orders is the same as currently used by the Exchange. The Exchange proposes to include this detail in Exchange rules to provide transparency regarding what is considered the working time of an order.
Proposed Rule 7.36P(f)(2) would provide that an order would be assigned a new working time any time the working price of an order changes. This proposed rule text would be based on the rule text in Rule 7.36(a)(3), without any substantive differences. A change to the working price could be because of a User’s instruction or because the order or modifier has a price that can change based on a reference price, such as an MPL Order, which is priced based on the PBBO.
Proposed Rule 7.36P(f)(3) would provide that an order would be assigned a new working time if the size of the order increases and that an order would retain its working time if the size of the order is decreased. This proposed rule text would be based on rule text in the first and second sentences of Rule 7.36(a)(3), without any substantive differences.
Proposed Rule 7.36P(f)(4) would provide that an order retains its working time if the order marking is changed from: (A) sell to sell short; (B) sell to sell short exempt; (C) sell short to sell; (D) sell short to sell short exempt; (E) sell short exempt to sell; and (F) sell short exempt to sell short. This rule text would use for the Pillar trading platform rules the same rule text as in Rule 7.16(f)(viii), without any substantive differences. The Exchange proposes to include the text from Rule 7.16(f)(viii) regarding order priority when changing order marking to Rule 7.36P to consolidate ranking in a single rule.
Proposed Rule 7.36P(g) would specify that the Exchange would enforce ranking restrictions applicable to specified order or modifier instructions. These order and modifier instructions would be identified in proposed new Rules 7.31P and 7.44P, which the Exchange will submit in a rule filing prior to implementing the Pillar trading platform.
In addition, the Exchange proposes a definition in Rule 1.1(aP) of NYSE Arca Book that would be applicable to the Pillar rules. The proposed definition would differ from the current definition of NYSE Arca Book in Rule 1.1(a) in that it would not include references to the terms “Display Order Process,” “Working Order Process,” and “Tracking Order Process,” which, as discussed above, are terms that will not be used in Pillar. As proposed, new Rule 1.1(aP) would provide that the term “NYSE Arca Book” refers to the NYSE Arca Marketplace’s electronic file of orders, which contains all orders entered on the NYSE Arca Marketplace.
QUOTE
It seems the CME's MDP 3.0 data changes, which were announced prior to this meeting, could be due to foreknowledge of the upcoming proposed order book/execution changes. However, that's just my conjecture.
After CQG switched to the new CME protocol for tick data, I changed from a 2000 tick chart to a 763 tick chart. This occurred on May 3rd. Now this morning when I opened up my chart, it appears they are back to using the old protocol. (the 763 tick chart now has way too many candles). The 2000 tick chart of old seems to have returned. Does anyone know what is going on?
Nice, I see it too; they unbundled it, I guess. I will keep the 6500 volume chart though, just in case they mess it up again.
Please note that I have received the following information from CQG. This applies to CQG, Continuum, NinjaTrader Continuum, and how we will provide data from our historical data servers going forward.
"CQG implemented a newer version of MDP 3.0 this weekend. This applies to CME, NYMEX, and COMEX. (It is not implemented yet for CBOT.)
The version is MDP 3.0, not FIX/FAST.
However, we have ‘unbundled’ (given greater precision to filled trade sizes) to the extent possible. Basically this is true for outright fills. The exchange does not provide details for implied fills, so these are still ‘bundled’."
The following user says Thank You to cory for this post:
Platform: Sierra Chart, TOS, Tradestation, NinjaTrader
Trading: energy
Posts: 114 since Jul 2012
Thanks: 81 given, 171 received
CTS has started migrating over to the MDP feeds.
Quoting
Over the next few weekends CTS will move the CME market channels to the new MDP3 price feed. If you experience any unusual activity with the price feed please contact CTS support.
It seems CQG went back to unbundled data. Does anyone know if it'll stay that way? Or if we'll have a choice? It messes me up when they switch without telling anyone.
Does anyone have an update on Tradestation's switch to this new data protocol?
It is supposed to happen this month (August), & last I heard (from emini-watch.com), TS was considering offering two different symbols for a particular instrument; one with bundled data, & the other unbundled (data converted on TS servers, as it is right now).
TS now says that an announcement is coming at the end of this month (August); this notice will show up in your inbox at the Customer Center when you are logged in.
Supposedly the data switch is now slated for September, but they will not say what day.
They still acknowledge that Tick charts will be affected, but will not say what they will do to resolve that issue.
Emini-Watch should have a more thorough update later this month; Barry seems to have some pull with TS & gets more detailed info from them on platform & data issues.
From what I can tell, I don't see any difference to date.
Or perhaps I have not been able to tell what has changed from the proposed changes, looking into the link posted by @Big Mike and the charts and stuff I have set up. So perhaps those need tweaks after I understand this.
I will run a few things I typically run from a short-term and long-term perspective and have them run tomorrow during RTH to the close... but so far I cannot see any difference.
Thnx
Quoting CH888: "TS now says that an announcement is coming at the end of this month (August) ..."
I have been watching the data on Rithmic, Continuum and TT Net very carefully at the micro level since this was announced. I have witnessed differences in the tick data and the tick-data-derived signals between the 3 platforms I use. I have had to make adjustments back and forth several times. No warning from the brokers or data vendors. Some vendors posted bulletins on their websites, the best being Rithmic's statement.
After speaking to a few different people in the business of selling data, this is what went on.
The switch to MDP 3.0 happened in different stages beginning in March. By different stages I mean some futures contracts on the CME, CBOT and NYMEX were switched to MDP 3.0 before others. You can imagine the havoc it created. Before the transition was complete, many retail customers complained about the new data having fewer ticks and being an "unfair and HFT-beneficial" consolidated feed, without really understanding what it meant at the granular level. So here is what the data vendors did, confirmed by a few of them; as a further example, look at the posted bulletin from IQFeed a few posts up.
The CME sends the data bundled on the aggressor side... IQFeed breaks it up and sends it unbundled/non-aggressor, and 3rd-party charting apps or other vendors can re-bundle it. Sounds like clean data?
So basically we are getting an MDP 3.0 protocol feed with pre-MDP 3.0 hybrid broken-up data. Call it what you will.
Do we want data that is "touched" before we get it, subject to each and every data vendor's preferred way of coding it? I doubt very much that IQFeed, Rithmic, Continuum and Tradestation coders are having lunch together to discuss how it should be done.
I heard that the HFT firms fought the new data for a while, since they had been playing with it since 2013... I'm wondering why.
Furthermore... every time I sent an email to a broker or tech support a week apart, I received a different answer, a retracted statement or a clueless response.
In summary, we are getting MDP 3.0 non-aggressor-side data, while MDP 3.0 is supposed to be aggressor-side.
The legacy data from 2009 until now has been non-aggressor...
Those who were trading back prior to 2009 should know which they prefer.
If any of the vendors, brokers, or microstructure data traders care to comment and enlighten us, I am all ears.
The following 6 users say Thank You to mpxtreme for this post:
Quoting mpxtreme: "I have been watching the data on Rithmic, Continuum and TT Net very carefully at the micro level since this was announced ..."
So, where are we at with this? A friend asked me what I was using and I didn't even know. I think most people don't know. I want to hear from aggressor-side users... because they are the minority. And probably the ones making the money... just speculating.
I sent an email to Rithmic and they have a special protocol: the Rithmic 01 MDP3 gateway. This is not their default feed.
I'm going to have to spend a lot of time re-working all my settings. But I believe it'll pay off.