NexusFi: Find Your Edge


Outside the Box and then some....


Discussion in Trading Journals

Thread stats: 31,592 views · 410 thanks given · 64 followers · 137 posts · 52 attachments

Top Posters:
1. iantg: 66 posts (325 thanks)
2. artemiso: 12 posts (34 thanks)
3. SMCJB: 12 posts (15 thanks)
4. pen15: 10 posts (2 thanks)

Best Posters (thanks per post):
1. iantg: 4.9
2. wldman: 3.7
3. artemiso: 2.8
4. SMCJB: 1.3




 

  #121 (permalink)
 iantg 
charlotte nc
 
Experience: Advanced
Platform: My Own System
Broker: Optimus
Trading: Emini (ES, YM, NQ, etc.)
Posts: 408 since Jan 2015
Thanks Given: 90
Thanks Received: 1,148

So one area that I have been researching recently, and that I can't quite wrap my head around, is how some of these price level changes occur (bid and ask prices move up or down 1 tick).

I am attaching a data mapping I did where I overlaid three different levels of data.

1. Bar Update Events using Bar related data: GetCurrentBid, GetCurrentBidVolume(), etc.
2. OnMarketDepth Related events: Every change in bid or ask volumes as they occur. This is what you would see in the DOM basically.
3. OnMarketData related events: Every transaction to occur at either the bid or ask. This just keeps a transaction log of all the contracts sold.

So my question is this: It's very easy to understand and relate to price level changes that occur naturally due to volume dropping to 0 or near 0 on one side. Such as:

BidV AskV
100 200
125 225
75 300
50 275
20 300

*** Price level changes: the price moves down because bid volume drops to 20 on the last published update, and the remaining 20 contracts either transact or cancel.

But what I see in the data often, and I mean 50% of the time or more, is a sequence like the following:

BidV AskV
100 200
125 225
75 300
150 250
175 200

*** Price level changes. In some cases it isn't even the side with the lower volume that gives way! What it looks like to me is that an entire side is simply canceling (100+ contracts). I don't see any corresponding transactions in my transaction counter from OnMarketData, yet the entire volume from the last published update just disappears and Bam!, there is a new price level.
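One rough way to make this concrete, assuming you can pair each final depth update with the transaction volume logged over the same interval: compare how much of the resting size actually printed against how much simply vanished. The function name and the 50% cutoff below are hypothetical, just to illustrate the classification.

```python
def classify_level_change(last_bid_depth, traded_at_bid):
    """Classify why a bid price level disappeared.

    last_bid_depth: contracts resting at the bid on the final depth
                    update before the level changed.
    traded_at_bid:  contracts that actually transacted at the bid over
                    the same interval (from the transaction log).
    The 50% threshold is an arbitrary illustrative cutoff.
    """
    if traded_at_bid >= last_bid_depth:
        return "traded through"      # resting size was consumed by fills
    cancelled = last_bid_depth - traded_at_bid
    if cancelled >= 0.5 * last_bid_depth:
        return "mostly cancelled"    # size vanished with few or no prints
    return "mixed"
```

If most observed level changes land in the "mostly cancelled" bucket, that supports the pulling/flipping interpretation rather than genuine trading through the level.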

In the enclosed spreadsheet, take a look at the price changes and let me know what you think. Is this spoofing, flipping, or something else?

Technical note: (Market Replay) In NT, the bar update events lead the OnMarketDepth and OnMarketData events, so the OnBarUpdate events occur first; then, depending on which side took the next update, any of the other 4 events may be updated next.

Ask Depth Update
Bid Depth Update
Ask Transaction Update
Bid Transaction Update

So each of these will move the prices at slightly different times. For this analysis I would just focus on the bid and ask depth updates as this will give you the most granular view of the volumes as they count down toward 0.

I could use some feedback to confirm what exactly this is.

Thanks,

Ian

In the analytical world there is no such thing as art, there is only the science you know and the science you don't know. Characterizing the science you don't know as "art" is a fool's game.
Attached Files
Elite Membership required to download: ES 3 Levels Raw Data.xlsx

  #122 (permalink)
pen15
Choctaw
 
Posts: 10 since Sep 2017
Thanks Given: 12
Thanks Received: 2

I sent the following PM to Ian, then realized I should post it here as well for the sake of sharing, since this thread sent me down this particular path. I'm attaching a screenshot instead of the actual spreadsheet because, whether it's useful or not, I always gain the most knowledge and possible breakthroughs from analyzing data from the ground up (i.e. building the actual equations myself based on the concepts I've gathered from others). Here's the PM:

Hey Ian,

First off, thank you for your info and responsiveness on the board. I really clicked with the way you go about looking at the market - it's been eye opening.

I didn't want to post this spreadsheet in the forum, but I was hoping I could get your thoughts/guidance on my analysis.

This is the first draft of the spreadsheet, so it doesn't have the most friendly layout yet, and I wouldn't be surprised if there are bugs I haven't found (the numbers seem to make sense though).

The blue section on the left is output data from the NT8 Strategy Analyzer on single tick. I'm just using the High/Low right now, so that should work fine for now. I'm only sending some of Jan17 for now, but I have tick data for all of 2017. The time frame I captured is M-F 8:40AM-11AM CST. The product is NQ.

The green middle section contains intermediate calculations needed for the summary section on the right.

The summary section on the right contains inputs (in orange) and outputs (in gray).

My idea was to figure out on every single tick what the min/max/average range (called "volatility" on the sheet) was looking back x bars and forward y bars. The "ticks per bar" and "bars per volatility lookback/lookforward period" are all inputs in the right section of the sheet. The plan is to average these values over a year in order to determine SL and PT levels for a completely random entry system.

My theory, based on reading your thread, is that historical volatility can tell me something about future volatility. Bifurcating the historical volatility into high and low (avoiding medium for now) will hopefully reveal a correlation between the range of the last x bars and the range of the next x bars. So what I'm trying to do with this spreadsheet is get the average future volatility during high/low historical volatility conditions over a year or more for a specific time series, and then I'll set my stop higher than that number and my target somewhere around half (and SL and PT could be two different sets of values depending on high or low historical vol). Ideally this will increase the chances of the average range staying inside my SL and my PT getting hit.
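The lookback/lookforward pairing described above can be sketched outside the spreadsheet in a few lines. This is my own framing of the idea, not the sheet's formulas: pair each bar's trailing range with the range of the bars that follow, then bucket or correlate the pairs.

```python
def range_of(prices, end, n):
    """Range (max - min) of the n prices ending at index `end`, inclusive."""
    window = prices[end - n + 1 : end + 1]
    return max(window) - min(window)

def back_forward_pairs(prices, x, y):
    """Pair each bar's lookback-x range with its lookforward-y range.

    Bucketing these pairs into high/low historical-volatility bins and
    averaging the forward range in each bin is one test of the theory.
    """
    pairs = []
    for i in range(x - 1, len(prices) - y):
        pairs.append((range_of(prices, i, x), range_of(prices, i + y, y)))
    return pairs
```

A correlation (or just a difference in bucket means) between the first and second elements of the pairs is what would confirm or reject the high/low bifurcation.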

So far, is this a sound theory, and does this sound like a good or bad way to go about this, or am I missing something important? I've tested this January data over a different month in NT playback and didn't get good results. I wouldn't expect to with just one month of data, but I wanted to get your thoughts before I get too far in the laborious spreadsheet building.

Thank you for reading!

Matt

P.S.
I've been staring at this spreadsheet for too long, so I'm not sure if it will make sense to anybody but me. If it doesn't make sense or if you don't want to download a massive file, feel free to just comment on my overall theory and not how I'm going about it in the spreadsheet.


  #123 (permalink)
 iantg 
charlotte nc
 
Experience: Advanced
Platform: My Own System
Broker: Optimus
Trading: Emini (ES, YM, NQ, etc.)
Posts: 408 since Jan 2015
Thanks Given: 90
Thanks Received: 1,148


Matt,

Thanks for reaching out. I really respect the work you are putting in on this. I think you are on the right track. I reviewed your data, and took a stab at analyzing it myself the way that I think will help you get a jump start on this.

Just to kick this off, I used a 100 period range. (Max - Min) over 100 rows and all I could do was 1 day due to size limits posting here. From this data set I was able to observe the following.

1. Statistical distribution of the various ranges over the data. This gives us an idea of which ranges are observed most often, which are on the low volatility end of the spectrum, and which are on the high end. There are obviously other ways to look at this. If you use a 25 period average these numbers get a lot larger but occur less frequently in a steady sequence.

2. From this basic distribution I picked all the reasonable ranges that could work for a low volatility type of prop betting system. (I could have done high volatility as well, but I just picked low for illustrative purposes.) For each of these ranges I wrote a simple conditional formula that starts an increment count when the market moves into the range and stops when the market moves out of it. This gives us an idea of how many bars the market will stay in a given range once it pops into it.

This is important because this type of prop betting system only works if the market will stay in a range long enough for a few trades to go through before conditions change and the market moves out of the range. So here I tested a few different ranges, and then ran statistics on them. I conclude that you have a fair shot at doing prop bets within certain ranges that hold steady for a number of rows. Some of the smaller ones only hold for 10-20 rows before breaking, so these wouldn't be as good. Ranges such as 1.0 or 1.25 hold for 50 to 70 rows before they break, and within these ranges there are many price level changes, so you have a great shot at some low risk scalping strategies.
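The "how long does the market dwell in a range once it pops into it" count above reduces to a run-length calculation. A minimal sketch, with the band edges as inputs (the function name and signature are my own):

```python
def dwell_lengths(rolling_ranges, lo, hi):
    """Length of each consecutive run of rows whose rolling range stays in [lo, hi].

    rolling_ranges: one (max - min) value per row, e.g. a 100-row range.
    """
    runs, count = [], 0
    for r in rolling_ranges:
        if lo <= r <= hi:
            count += 1
        elif count:
            runs.append(count)   # a run just ended; record its length
            count = 0
    if count:
        runs.append(count)       # run still open at the end of the data
    return runs
```

Averaging `dwell_lengths(ranges, 1.0, 1.25)` over many days is one way to verify the 50-70 row figure quoted above.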


From this you could do a whole host of easy prop bets. Here are a few ideas though:

1. Random Entries / Random directions / Small Size: Here you would just randomly target small sizes under the range threshold. In this example data 1.25 or 1.0 would be the range, so you could target .25, .5, or .75 as targets and hit these all day long. For SL you could go with a 2x or 3x size and only likely hit this as the market breaks out of your range. So you could likely hit 2-4 winners while inside the range and 1 loser as the market breaks out of the range every time.

2. Mean Reversion: This is fairly easy to program: when the market pops into one of these ranges, you just bet on a reversal. So if the market moves up 2 ticks, you bet immediately that it will move down 2 ticks. If the market moves up 3 ticks, you bet it will move down 3 ticks. You are likely to hit 2-3 winners before you hit a loser, and you will hit your loser right as the market breaks out of the range. Again, with a range like 1.25 you will stay there for around 50-70 bars, so this should give you 5-10 price changes to play with.
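The winner/loser arithmetic in both ideas reduces to a simple expectancy check. The tick value and commission below are illustrative assumptions (ES-style figures), not numbers from this post:

```python
def expectancy(p_win, target_ticks, stop_ticks, tick_value=12.50, commission=4.00):
    """Per-trade dollar expectancy for a fixed PT/SL prop bet.

    p_win: estimated probability the target is hit before the stop,
           e.g. ~0.75 for the "2-4 winners per 1 loser" framing above.
    tick_value and commission are illustrative, not from the post.
    """
    win = target_ticks * tick_value
    loss = stop_ticks * tick_value
    return p_win * win - (1 - p_win) * loss - commission

# e.g. 3 winners out of 4, 2-tick target, 4-tick stop:
# 0.75 * 25.00 - 0.25 * 50.00 - 4.00 = 2.25 per trade
```

The point of the check is that a 2x-3x stop only pays if the in-range win rate really is 2-4 winners per breakout loser; drop p_win a little and the edge disappears.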

I hope this helps.

Good luck, and thanks for running with this!


Ian




Attached Files
Elite Membership required to download: NQ Jan.zip
  #124 (permalink)
pen15
Choctaw
 
Posts: 10 since Sep 2017
Thanks Given: 12
Thanks Received: 2

This is great, thank you! I'm going to dig into this and report back.


  #125 (permalink)
 iantg 
charlotte nc
 
Experience: Advanced
Platform: My Own System
Broker: Optimus
Trading: Emini (ES, YM, NQ, etc.)
Posts: 408 since Jan 2015
Thanks Given: 90
Thanks Received: 1,148

Since there have been a few folks interested in some of my research regarding prop betting / volatility trading, I figured I would add a little more color to the conversation and explain exactly how I approach this. This should be of keen interest to a few folks reading this, and to anyone who appreciates statistical research. A few of you come to mind... @pen15, @sholcombe4, @jackbravo

Here is a little background information / context: https://en.wikipedia.org/wiki/Cartesian_product

So in layman's terms what this can do is allow you to run every possible permutation of something against itself. A simple example would be if you have ten numbers 1-10 and ten letters a-j. By doing a Cartesian join you would end up with 100 unique combinations of every possible number against every possible letter.
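The same ten-numbers-by-ten-letters example is a one-liner in Python with itertools.product, which is the standard-library way to take a Cartesian product:

```python
from itertools import product

numbers = range(1, 11)               # 1-10
letters = "abcdefghij"               # a-j
combos = list(product(numbers, letters))

print(len(combos))                   # 100: every number paired with every letter
print(combos[0], combos[-1])         # (1, 'a') (10, 'j')
```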

So a while back I was kind of where @pen15 is now: researching volatility changes and how there are some great betting opportunities in running different PT / SL targets against different market ranges to exploit the built-in odds the market sometimes gives us. But after a while of doing one or two possible permutations of this, I started researching whether it was possible to scale this concept and see more... like way more. I am an old school SQL guy, and the idea of using a cross join (properly called a Cartesian product / join) came to me in sort of a eureka moment. Instead of assigning 1-2 variables to test a few different setups (say a PT of 3 ticks vs. an SL of 10 ticks), I wanted to know how every possible permutation of every PT vs. SL setting would have performed. So I built one.

How it works: I had to set up around 20 different methods to capture tick movements in different ranges. This was the tricky part, because these had to be full moves netting out to, say, 5 ticks up or 5 ticks down; I was not looking for a single move to get me there but rather a cumulative move. So 3 ticks up, followed by 1 tick down, followed by 3 more ticks up = 5 full ticks up. To do this I had to score every sequence up against every sequence down and reset the counter only when I got to a full move. I did this for 20 different tick levels, capturing both up and down sequences, so this was actually 40 total variables I was tracking with granular tick data in real time.

Once I had a move captured I had a corresponding variable to capture and increment the counter. So if I see a move of 5 ticks up, then my variable that holds 5 ticks up would increment by 1. So I had another 40 variables to hold all of these.

I had a third variable set that just added both the up and down moves together. So for 5 total it would add both the count of 5 longs + the count of 5 shorts. This would give me an idea of how much the market moves in a certain tick range independent of up / down fluctuations.
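The cumulative move counter described above can be sketched as follows. This is my own simplified version: one function parameterized by level, instead of 40 separate variables, and the combined up+down count is just the sum of the two returned values.

```python
def count_net_moves(tick_deltas, level):
    """Count completed cumulative moves of `level` ticks, up and down.

    tick_deltas: signed tick changes as they occur (+3, -1, +3, ...).
    A move completes when the running net change reaches +level or
    -level; the counter then resets, as described in the text.
    """
    ups = downs = net = 0
    for d in tick_deltas:
        net += d
        if net >= level:
            ups += 1
            net = 0
        elif net <= -level:
            downs += 1
            net = 0
    return ups, downs

# The worked example from the text: 3 up, 1 down, 3 up nets out to one 5-tick up move.
```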

With all these variables and all these different statistics I ran a Cartesian join on them to analyze every possible PT / SL combination to see how they would each have played out. I ran expectancy statistics on all of them using these variables. I tested every variable as a possible PT and every variable as a possible SL against each other. In total this was around 300 different tests. The base data looked like this.

1. Tick Value PT
2. Tick Value SL
3. Frequency of Occurrence of Tick Value for PT
4. Frequency of Occurrence of Tick Value for SL

From this I ran statistics on not only the average profit per unit, but also, very importantly, how many trades I would realistically have obtained using each setting. So for example, a tick value of 3 vs. a tick value of 4 would get a ton of trades, but a tick value of 20 vs. a tick value of 17 would get very few trades, for obvious reasons. So I built in a fairly accurate algorithm to approximate how many trades I would likely get using each setting.

In the end I would take my total profit per unit * my realistic trade number - the total commission cost and get to a net total value figure for each possible PT / SL set. From here in the final step I would just sort the list highest to lowest and take the one that had the best possible outcome. And that was my bet!
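A stripped-down sketch of the whole pipeline: the frequency counts feed a Cartesian product of candidate PT/SL levels, each pairing gets an approximate net value, and the list is sorted best-first. The tick value, commission, win-probability estimate, and the min() trade-count proxy are all my own illustrative stand-ins for the more careful algorithm described above.

```python
from itertools import product

def rank_pt_sl(freq, tick_value=12.50, commission=4.00):
    """Rank every PT/SL pairing by an approximate net dollar value.

    freq: {ticks: count of completed cumulative moves of that size}.
    tick_value, commission, and both proxies below are illustrative.
    """
    rows = []
    for pt, sl in product(freq, freq):      # Cartesian join of the counters
        if pt == sl:
            continue
        trades = min(freq[pt], freq[sl])            # crude trade-count proxy
        p_win = freq[pt] / (freq[pt] + freq[sl])    # crude hit-rate proxy
        per_trade = (p_win * pt - (1 - p_win) * sl) * tick_value
        rows.append((pt, sl, trades * (per_trade - commission)))
    return sorted(rows, key=lambda r: r[2], reverse=True)   # best bet first
```

The first row of the sorted output plays the role of "the one that I bet with" in the description above.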

I built this in visual studio before I ever moved it over to ninja trader. Here is a screen grab of it running in a web app to give you an idea. (The first image shows the top of the list, and second shows the bottom along with the row that has the highest total net value.) In this example this is the one that I bet with. (Sorry for the crudeness cosmetically, I never really intended this to be seen by anyone)





Once I had this working, I moved it over to NinjaTrader and set it loose. It worked fairly well right off the bat, but like any system that uses past data to predict the future, the need to determine which past data is relevant, and how far back to look, became a key consideration. So I tested tons of different ideas.

1. Reset once a day
2. Reset twice a day (Morning session tends to be quicker, and afternoon session tends to be slower)
3. Reset after N number of bars
4. Reset only when a significant volatility change occurs.

Everything about this process sucked. As you can imagine, this was like a terrible game of whack-a-mole, because as one method gained in some respects it would lose in others. In the end there was no clear-cut winner that I could really identify, so I started integrating volatility statistics with time based statistics and combining these. I felt like this was going fairly well, but eventually I just lost interest in this type of trading method and moved on to my current method.

I may eventually pick this back up one day, and it did test quite well, but since I have changed courses in my trading style a bit recently I figured it wouldn't hurt to put this out there. Also I don't expect that too many people would ever attempt something like this because it just sounds crazy. But for the few people interested in this sort of thing, here it is.

Spreadsheet enclosed with further details. Let me know your thoughts, and I may provide more details.

Happy Trading!

Ian

Attached Files
Elite Membership required to download: Cartesian Product Example.zip
  #126 (permalink)
 
jackbravo's Avatar
 jackbravo 
SF, CA/USA
 
Experience: Beginner
Platform: SC
Broker: Stage 5
Trading: NQ...uh..ES actually
Posts: 1,337 since Jun 2014
Thanks Given: 4,362
Thanks Received: 2,400

Hey Ian, thanks for sharing. Using Cartesian products is an interesting idea for generating different variables to test. Once I figure out what I'm doing, I have this idea to generate a Bayesian-type probability after entry that gets continuously updated with each additional bar/data point, and then to set a level at which to either exit or continue with the trade (e.g. 30% probability of continuation = exit). I was planning to test many variables to find their effect on the likelihood of continuation in the same direction vs. reversal, such as volatility, position of OHLC in comparison to the entry point, maybe even integrating cross-market data. Using Cartesian products might be a simple way of testing all those variables, so I'm going to tuck that away for later. Thanks!

"It does not matter how slowly you go, as long as you do not stop." Confucius
  #127 (permalink)
pen15
Choctaw
 
Posts: 10 since Sep 2017
Thanks Given: 12
Thanks Received: 2


iantg View Post
With all these variables and all these different statistics I ran a Cartesian join on them to analyze every possible PT / SL combination to see how they would each have played out.

Thanks for sharing Ian. How did you process this Cartesian join? I've tried a similar thing on tick data with a what-if table in Excel (SL values along the rows and PT values along the columns), but it takes forever to process with a large data set. I've also tried to reach the same ends by running Excel Solver in evolutionary mode, but again it doesn't seem to be the right tool with so much data. Maybe it's time for me to migrate to MySQL.

  #128 (permalink)
 iantg 
charlotte nc
 
Experience: Advanced
Platform: My Own System
Broker: Optimus
Trading: Emini (ES, YM, NQ, etc.)
Posts: 408 since Jan 2015
Thanks Given: 90
Thanks Received: 1,148

Hi Matt,

You're right, this level of number crunching isn't really workable in Excel. You would need to go with either SQL or C# to be able to do this.

In SQL this is simply just a cross join. So you take 2 views, or 2 tables, or 1 view and 1 table and join them like this.

select * from dbo.[table 1] a cross join dbo.[table 2] b

(Note there is no ON clause: a cross join pairs every row of the first table with every row of the second.)

In C# you have to first create two list objects and then fill the lists. This is the technical part where I loaded the results of my tick counters into the various lists. Once you have created and loaded the lists, then you use something like this:

var query = from x in firstList
            from y in secondList
            select new { x, y };

This joins the results of the two lists together and then you can select the fields you want. (x, y) for example.


The great thing about NT is that it runs C#, which gives you the ability to extend the functionality of your code way beyond the native NinjaScript objects. This is one of the more interesting examples.

Ian



  #129 (permalink)
 
SMCJB's Avatar
 SMCJB 
Houston TX
Legendary Market Wizard
 
Experience: Advanced
Platform: TT and Stellar
Broker: Advantage Futures
Trading: Primarily Energy but also a little Equities, Fixed Income, Metals and Crypto.
Frequency: Many times daily
Duration: Never
Posts: 5,048 since Dec 2013
Thanks Given: 4,384
Thanks Received: 10,205


iantg View Post
You would need to go with either SQL or C# to be able to do this.

You could always use Access. For somebody who already has Office but doesn't have SQL or a C# environment already installed it would be a lot easier and quicker.

  #130 (permalink)
 jefforey 
edison new jersey
 
Experience: None
Platform: motivewave
Trading: ES
Posts: 69 since Nov 2016
Thanks Given: 45
Thanks Received: 18


My opinion is that for a retail trader, slippage of 2 ticks per trade is a conservative figure. Yes, the slippage could be more. If your trade has one entry leg and one exit leg then 2 ticks; if there are more legs then there could be more slippage. If you use limit orders only, then the simulation is useless, because many of your signals won't result in fills in actual trading. For a one contract trade that is $25 per trade of slippage. Adding brokerage of $4 (round trip) takes the total to $29 per trade. That means on average every trade must win at least 3 ticks ($37.50) for the strategy to be profitable.
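The arithmetic above checks out at the standard ES tick value of $12.50 per tick (implied by the poster's $25 for 2 ticks):

```python
tick_value = 12.50          # $ per tick on one ES contract
slippage_ticks = 2          # the conservative figure above (one entry + one exit leg)
commission = 4.00           # round-trip brokerage

cost_per_trade = slippage_ticks * tick_value + commission
print(cost_per_trade)       # 29.0 dollars of friction per round trip

# Breakeven: the average winner must clear the friction, i.e. at least 3 ticks.
print(3 * tick_value)       # 37.5 > 29.0
```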





Last Updated on June 23, 2018

