Introduction

Back in the day, when I played EMS, I was under the impression that accuracy decreases the chance of hitting numbers in the lower part of your damage range, and I've had this suspicion ever since. I wasn't able to find any proof for or against this on the internet, so I decided to test it myself.

EDIT: some clarification: I do know the damage formulas for min damage and max damage, and I know that accuracy does not affect your min or max hit. However, it would have been possible that accuracy decreased how often you hit close to your min damage. I also knew that accuracy decreases the chance to miss; I just thought that it might ALSO improve your average damage by making you hit your min hit less often. And I would agree that the third test is a bit excessive, but I enjoyed doing it, so it's all good.

Setup

Since I expected higher accuracy to give more stable damage, I thought it would be appropriate to conduct this test on a character with low accuracy. I decided to use a warrior wielding a dagger (no mastery, to maximize the instability of the hits, in the hope of seeing them become more stable with accuracy). I conducted this test at jr. grupins, for no particular reason other than that I needed a map with only one type of mob. I hit 100 times with 57 accuracy and 100 times with 91 accuracy and wrote down every hit. The skill I used to hit the mobs was Power Strike. In the 57-accuracy test, I wore a 9 dex cape, no hat, and no eye accessory. In the 91-accuracy test, I wore a 5 dex cape, a 4 dex / 14 accuracy hat, and a 10 accuracy white raccoon mask, and used a 10 accuracy potion. This way, no stats other than accuracy were affected.

Results

The average of the 100 hits with 57 accuracy was 1567.
The average of the 100 hits with 91 accuracy was 1474.

This is a difference of nearly 100, in favor of less accuracy. To check whether this difference is statistically significant, I conducted a statistical test at a 95% confidence level.

The null hypothesis is that the average of the hits with 57 accuracy is equal to the average of the hits with 91 accuracy.
The alternative hypothesis is that the average of the hits with 57 accuracy is greater than the average of the hits with 91 accuracy.
The test is conducted at α = 5%.

The variance of the hits with 57 accuracy is 469118, while the variance of the hits with 91 accuracy is 563170. In order to compare the means, we have to see whether we can assume that the variances are equal. I used the F.INV function in Excel to determine the rejection region for the assumption of equal variances, which for a 90% confidence interval is 1.295. Dividing the variances of the two samples gives 563170/469118 = 1.200, which does not lie in the rejection region of 1.295, so at 90% significance we cannot prove that the variances are different. This does not mean that we have proven the variances to be equal, but it does make the assumption of equal variances more defensible. Since the sample sizes are equal, we can simply add the variances and divide by two to get the pooled variance: (469118 + 563170)/2 = 516144.

Now, to test the difference in means, we can use the test statistic t = (mean1 - mean2) / sqrt(s² * (1/n1 + 1/n2)), where s² is the pooled variance and n1 = n2 = 100. Filling this in gives t = (1567 - 1474) / sqrt(516144 * (1/100 + 1/100)) ≈ 0.915. I used the T.INV function in Excel to determine the rejection region for α = 5%, which gave 1.653. The t value of 0.915 does not lie in the rejection region of 1.653, so at a 95% confidence level we cannot reject the null hypothesis: there is insufficient evidence to conclude that the hits with 57 accuracy were on average higher than the hits with 91 accuracy.
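For anyone who would rather reproduce this without Excel: both steps (the F critical value for the variance check and the pooled t-test) can be redone from the summary statistics above with Python's scipy. A minimal sketch:

```python
from scipy import stats

n = 100  # hits per sample

# Variance check: one-tailed F critical value at 90%, df = (n-1, n-1)
f_crit = stats.f.ppf(0.90, n - 1, n - 1)   # ~1.295, same as Excel's F.INV
print(563170 / 469118 < f_crit)            # True -> assume equal variances

# Pooled two-sample t-test straight from the summary statistics
t_stat, _ = stats.ttest_ind_from_stats(
    mean1=1567, std1=469118 ** 0.5, nobs1=n,   # 57 accuracy sample
    mean2=1474, std2=563170 ** 0.5, nobs2=n,   # 91 accuracy sample
    equal_var=True,                            # justified by the F check
)
t_crit = stats.t.ppf(0.95, 2 * n - 2)  # one-sided rejection value, ~1.653
print(t_stat, t_stat > t_crit)         # ~0.915, False -> cannot reject H0
```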
Improved setup

After conducting this test, I realized that using a weapon the character has no mastery in only increases the variance and decreases the power of the test. A character that already has a lot of accuracy should therefore give more precise results, so I did another test using a hermit. This time, I only used basic attacks, and I only counted the non-critical hits, to further stabilize the results. I recorded 400 hits per sample this time, to increase the power of the test. I used the same equips as last time to create the difference in accuracy between the two samples: 200 accuracy in one sample, and 234 in the other.

Results

The average of the 400 hits with 200 accuracy was 1096.
The average of the 400 hits with 234 accuracy was 1078.

Again, the sample with less accuracy has the higher average. I repeated the same steps as last time.

The null hypothesis is that the average of the hits with 200 accuracy is equal to the average of the hits with 234 accuracy.
The alternative hypothesis is that the average of the hits with 200 accuracy is greater than the average of the hits with 234 accuracy.
The test is conducted at α = 5%.

The variance of the hits with 200 accuracy is 33779, while the variance of the hits with 234 accuracy is 36030. Dividing them gives 36030/33779 = 1.067. Using Excel's F.INV function again at α = 10%, we get a rejection value of 1.137. Like last time, there is insufficient evidence that the variances are different, since 1.067 is not in the rejection region of 1.137. We will assume that the variances are the same and calculate their average: (33779 + 36030)/2 = 34905.

Using the same test statistic, we find t = (1096 - 1078) / sqrt(34905 * (1/400 + 1/400)) ≈ 1.36. Using Excel's T.INV function again at α = 5%, we find a rejection region of 1.647. This time, the t value was closer to the rejection region, but not close enough to reject the null hypothesis. Again, there is insufficient evidence to conclude that the hits with 200 accuracy were on average higher than the hits with 234 accuracy.

Improved setup

After doing this test, I realized that if accuracy does affect damage, then the difference between 234 and 200 accuracy is only marginal. So I needed a class with mastery, but without a lot of accuracy. I also wasn't 100% sure whether leaving out the critical hits in the previous test was justified, so I decided to do one more test. To settle this question once and for all, I chose a sample size of 800 and used a brawler with mastery. Like last time, I used basic attacks, and I used the same equips as last time to create the difference in accuracy between the two samples: 64 accuracy in one sample, and 98 in the other.

Results

The average of the 800 hits with 64 accuracy was 341.
The average of the 800 hits with 98 accuracy was 344.

This time, the higher accuracy had the higher average hit, but these results are incredibly close, which is to be expected with such a large sample size if there is indeed no difference. Let's test it anyway. We will assume equal variances, since dividing them gives 4105/4035 = 1.017, which is not in the rejection region of 1.095 that I got using Excel's F.INV function. The average variance is (4105 + 4035)/2 = 4070. Using the same test statistic, we find t = (341 - 344) / sqrt(4070 * (1/800 + 1/800)) ≈ -0.94, which is not in the rejection region of 1.646.

Conclusion

The final conclusion then, and the TL;DR (I don't blame you), is that at 95% confidence, we have not found enough evidence to conclude that accuracy affects damage.
If a real difference exists, it must be marginal at best. So, don't invest in accuracy equips/scrolls in the hope of increasing your damage.
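As a final check on the arithmetic, all three t values can be recomputed from the summary statistics reported above with a few lines of Python:

```python
import math

def pooled_t(mean_low, mean_high, var_low, var_high, n):
    """Pooled two-sample t statistic for two samples of equal size n."""
    pooled_var = (var_low + var_high) / 2         # equal n, so a plain average
    se = math.sqrt(pooled_var * (1 / n + 1 / n))  # standard error of the difference
    return (mean_low - mean_high) / se

# (mean low acc, mean high acc, var low acc, var high acc, hits per sample)
print(pooled_t(1567, 1474, 469118, 563170, 100))  # test 1: ~0.915
print(pooled_t(1096, 1078,  33779,  36030, 400))  # test 2: ~1.36
print(pooled_t( 341,  344,   4105,   4035, 800))  # test 3: ~-0.94
```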
Good math, you're right; the damage range equation does not include the accuracy stat. I'm not sure why you felt the need to do a statistical analysis on it when the equations are common knowledge, but I suppose it's good practice.
Yes, accuracy decreases your miss rate. I know the formulas for min and max damage, but it would've been possible that accuracy affected how often you hit closer to your min range or closer to your max range.
LOL, I remember this one guy sitting in front of KPQ showing off his equips. He showed me a 16 accuracy sad mask and was bragging that it was the highest accuracy on Royals. I told him accuracy doesn't affect damage, so why invest so much in it? He got mad and started calling me a noob and an idiot. Glad someone debunked it, even though it was kinda "common knowledge".
That depends on your level. Also, you wouldn't really want to know the bare minimum accuracy needed to hit Zakum or HT at all, because with that amount you would hit less than 1% of the time. A more interesting number would be the accuracy required to hit them 99% or 100% of the time. There are accuracy calculators available on the internet that help you work out these numbers: http://screamingstatue.com/accuracymain.php
Narrowing the bell curve closer to the center does not affect the center, just how likely you are to land near it. So in theory, the average would be the same even if that statement were true, would it not?

1, 2, 3, 4, 5, 6, 7, 8, 9
4, 4, 4, 5, 5, 5, 6, 6, 6

Both sets average 5.
I think what Chris's hypothesis means is that the min and max damage stay the same, but accuracy affects how often you hit the lower or the higher part of the range. The figures below are just example numbers for my elaboration:

1, 2, 3, 4, 5, 6, 7, 8, 9 (moderate accuracy)
1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 6, 7, 8, 9 (lower accuracy)
1, 2, 3, 4, 5, 6, 7, 7, 8, 8, 8, 9, 9, 9, 9 (higher accuracy)

Here, higher accuracy gives a higher average. But Chris's sampling test shows accuracy has no effect at 95% confidence, meaning it would be 1, 2, 3, 4, 5, 6, 7, 8, 9 for lower, moderate, and higher accuracy alike.
This is indeed what my hypothesis was. It is also possible, however, that what @Michael said is the case, which would indeed mean that the mean stays the same regardless of accuracy.

Here are the QQ plots against a uniform distribution for the third test I did (64 acc vs 98 acc). The higher accuracy does seem to follow the uniform distribution more strictly. As for the fluctuations: the lower accuracy sample sits mostly below the expected value (from 2 to -10), while the higher accuracy sample stays closer to it (from 4 to -4). I don't know if these differences are significant, but it doesn't look like it. I don't know how to test this, though.

Something I should probably mention is the medians of all the tests I did:

Test 1: 57 acc: 1572, 91 acc: 1480
Test 2: 200 acc: 1097, 234 acc: 1094
Test 3: 64 acc: 341, 98 acc: 348
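If anyone wants to draw these QQ plots for their own data, this is roughly how they can be made against a uniform distribution; a sketch in Python, where the file name is hypothetical and would contain one recorded hit per line:

```python
import numpy as np
import matplotlib.pyplot as plt

hits = np.loadtxt("hits_64acc.txt")  # hypothetical file: one damage number per line

# Plotting positions and the matching quantiles of a uniform
# distribution stretched between the observed min and max hit
probs = (np.arange(1, len(hits) + 1) - 0.5) / len(hits)
expected = hits.min() + probs * (hits.max() - hits.min())

plt.scatter(expected, np.sort(hits), s=5)
plt.plot([hits.min(), hits.max()], [hits.min(), hits.max()], color="red")  # y = x reference
plt.xlabel("Expected hit (uniform)")
plt.ylabel("Observed hit")
plt.show()
```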
Ah, I see what he aimed to prove now, thanks. Again though, the damage range formula doesn't include accuracy as a stat - for the phenomenon you're describing to actually occur, I'd assume you'd need some sort of exponent involving accuracy in the formula, which we know doesn't exist. Unless I'm wrong - is there another mathematical way this sort of thing could occur? Or are you thinking that there may be more at play in the code producing damage numbers than just the damage range formula? You seem to have a stronger grasp of mathematics than my six-years-outdated entry-level university math, so I'm still interested in what you were going for by doing this in the first place.
The formulas for min and max damage are indeed very well known:

MAX = (Primary Stat + Secondary Stat) * Weapon Attack / 100
MIN = (Primary Stat * 0.9 * Skill Mastery + Secondary Stat) * Weapon Attack / 100

However, there are different ways the game could determine how often you hit high and how often you hit low. The damage could be uniformly distributed (all hits within the range occur equally often) or normally distributed (your average hit occurs far more often than your min or max hit). While I thought a uniform distribution was the most likely, I wanted to see whether accuracy skewed the distribution to the left (lower hits occurring less frequently and higher hits more frequently, with all hits still within the min-max range). From the QQ plot I posted, though, it seems that Maple uses a uniform distribution, which in code would simply be 'damage = random(min, max)'. That makes sense for a 13-year-old game. For example, with a min range of 1 and a max range of 5, every value in between would be equally likely to occur.
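To make that concrete, here is a toy version of such a uniform roll built from the two formulas above. The stat numbers are made up purely for illustration, and note that accuracy appears nowhere in it:

```python
import random

def damage_roll(primary, secondary, watk, mastery):
    """Toy damage roll, assuming a uniform draw between min and max damage."""
    max_dmg = (primary + secondary) * watk / 100
    min_dmg = (primary * 0.9 * mastery + secondary) * watk / 100
    return random.uniform(min_dmg, max_dmg)

# Made-up example stats; the average converges to (min + max) / 2
# no matter what your accuracy is.
rolls = [damage_roll(primary=400, secondary=60, watk=80, mastery=0.6)
         for _ in range(10_000)]
print(sum(rolls) / len(rolls))
```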
Thank you Chrizz for the marvelous in-depth testing of the correlation between accuracy and damage numbers. Very good use of mathematical formulas and statistical graphs in this guide!