Hello, I'm not sure if this is the right subforum for this kind of thread, so if it's not, please move it to the correct one. I've tried to explain this in a nutshell; I'm willing to provide more arguments on this subject if requested.

After browsing the forum for a couple of days, it has come to my attention that there are a lot of issues regarding RWT and the use of smegas to advertise it. There have been plenty of suggestions to fix this problem (smega cooldown, keyword filter, level-based system, etc.). These options would work, with the right amount of in-game moderation by the GMs. But the problem is that they have to be in-game, checking the smegas, and they can only act AFTER a smega has been used. A keyword filter would be too hard to implement and probably easily circumvented by cryptic message buildup.

Sentiment analysis using Deep Learning

Deep Learning is a relatively new technology that uses neural networks to make decisions; you could say a Deep Learning program has a "brain". It's possible to use sentiment analysis, which is usually used to measure things like the emotional content of messages, to determine whether or not a message contains sentiment towards RWT. By collecting a lot of data (thousands of messages) and categorizing it into two categories, legal and illegal, we can train a neural network to understand the differences between those messages. This means that the network itself decides which factors determine whether a message is legal or illegal; there is no need for a human to think of any keywords. This can be used to analyse messages on the server side, before they get sent to each client.

Reference for sentiment analysis: https://towardsdatascience.com/sentiment-analysis-for-text-with-deep-learning-2f0a0c6472b5

The code in that reference is written in Python, but the Python libraries involved are wrappers around C++ code under the hood, so it's perfectly doable to write it in C++.

Feel free to ask questions!
But please do not clutter this thread with random discussion.
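To make the idea concrete, here is a toy sketch of the training pipeline. It uses a single-layer logistic model over bag-of-words counts standing in for a real deep network, and the messages are invented examples of mine; a production version would use an actual neural network trained on thousands of logged messages.

```python
# Toy sketch: train a classifier on messages labeled "legal"/"illegal".
# A single-layer logistic model stands in for a deep network here, and
# the training messages are invented, purely to illustrate the pipeline.
import math

TRAINING_DATA = [
    ("selling mesos for paypal, pm me", "illegal"),
    ("cheap mesos real money fast delivery", "illegal"),
    ("buying scrolls 30% glove att", "legal"),
    ("selling zhelm service, in-game mesos only", "legal"),
    ("visit my site for cheap mesos", "illegal"),
    ("lf> party for zakum run tonight", "legal"),
]

def tokenize(message):
    return message.lower().split()

# Build the vocabulary from the training messages.
vocab = sorted({tok for msg, _ in TRAINING_DATA for tok in tokenize(msg)})
index = {tok: i for i, tok in enumerate(vocab)}

def features(message):
    # Bag-of-words count vector over the known vocabulary.
    vec = [0.0] * len(vocab)
    for tok in tokenize(message):
        if tok in index:
            vec[index[tok]] += 1.0
    return vec

def predict_prob(weights, bias, vec):
    # Logistic output: probability that the message is "illegal".
    z = bias + sum(w * x for w, x in zip(weights, vec))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=200, lr=0.5):
    # Plain stochastic gradient descent on the logistic loss.
    weights = [0.0] * len(vocab)
    bias = 0.0
    for _ in range(epochs):
        for msg, label in data:
            vec = features(msg)
            target = 1.0 if label == "illegal" else 0.0
            error = predict_prob(weights, bias, vec) - target
            for i, x in enumerate(vec):
                weights[i] -= lr * error * x
            bias -= lr * error
    return weights, bias

weights, bias = train(TRAINING_DATA)
# Score an unseen message: probability that it is RWT-related.
score = predict_prob(weights, bias, features("selling mesos for real money"))
```

The same features/predict_prob pair would run server-side on each outgoing smega, with messages scoring above a threshold held back for review.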
Will you write the code? Or will this suggestion force our one and only active developer to study that topic on top of having a "real life"? I am not too much into programming myself, hence the question. Apart from that, good suggestion. However, do you remember the first trial of some AI, I think Google developed it, where the "Internet" (namely 4chan, I think?) trained it via Twitter to become extremely toxic and justify Hitler's actions, etc.? Or would the legal/illegal separation be done manually, by a person deciding whether or not a message is allowed?
Good questions!

Will you write the code?
I do have the knowledge, so I could write the code required for this solution if I was asked to do so. Though without a form of compensation, I would prefer to limit my efforts to advice/minimal programming work.

Or would the legal/illegal separation be done manually, by a person deciding whether or not a message is allowed?
The dataset used to train the neural network is created by the developer. This program would not actively learn from live input, precisely to prevent such things from happening. Message data can be grabbed from the logs to quickly build up a dataset, and a few selected volunteers could then help assign the right category to each message. Every now and then, the dataset could be expanded with new data and the neural network retrained to increase the prediction accuracy. This means that, unlike the Google AI, the learning data for this network is controlled.
For this to work to the extent you want it to, and be accurate, we'd need a much larger data sample than a few thousand messages. I don't see this being a convenient solution to the problem.
Please don't post here if you don't have anything useful to add. To increase the accuracy, a larger data sample is indeed needed. However, transfer learning exists, where you train a model starting from an existing model created for a similar problem; in that case relatively little data is required. If there is no suitable pretrained model, a larger dataset would be needed, but given how many years the server has existed, I am sure the data is already stored. If you want to effectively filter messages, this is literally the best and most advanced way to do it right now.
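To illustrate what transfer learning buys you, here is a toy warm-start sketch in plain Python: instead of starting from random weights, fine-tuning continues gradient descent from the weights of an existing model. The "pretrained" weights and the 3-feature representation are invented for the example; a real setup would fine-tune an actual pretrained sentiment network.

```python
# Toy warm-start illustration of transfer learning: continue training
# from existing weights instead of from scratch, so a small
# task-specific dataset is enough. All numbers here are invented.
import math

def predict(weights, vec):
    # Logistic output: probability that the message is "illegal".
    z = sum(w * x for w, x in zip(weights, vec))
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune(pretrained, data, lr=0.1, epochs=100):
    # Gradient descent starting from the pretrained weights --
    # the core idea behind (this simplified form of) transfer learning.
    weights = list(pretrained)
    for _ in range(epochs):
        for vec, target in data:
            err = predict(weights, vec) - target
            for i, x in enumerate(vec):
                weights[i] -= lr * err * x
    return weights

# Invented "pretrained" weights over a 3-feature toy representation
# (say: counts of money-related, trade-related, and neutral words).
pretrained = [1.5, 0.2, -0.5]

# Small task-specific dataset: (features, 1.0 = illegal / 0.0 = legal).
small_dataset = [
    ([2.0, 1.0, 0.0], 1.0),
    ([0.0, 1.0, 2.0], 0.0),
    ([1.0, 0.0, 0.0], 1.0),
    ([0.0, 0.0, 3.0], 0.0),
]

tuned = fine_tune(pretrained, small_dataset)
```

Because the starting weights already encode something useful, far fewer labeled examples are needed than when training from zero.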
Kind of a weird plug to pitch the idea and then say you'd want to be paid... lol

Anyways, I don't see this working in any reasonable way. Language is vastly complicated, and AI struggles with simple things sometimes. I know the whole point of neural networks is that they learn, but as the other person said, it would require a lot of trial and error. It seems easier to just block certain words, and if people feel the need to be a dick, to then just ban them. It just seems odd to reinvent the wheel with more steps when Kevin could do something like finish the GM command or whatnot.

Edit: to anyone curious about the neural network thing, this is a more user-friendly way to get a visual understanding of what it does.
It's too much work for a single issue on the server that, mind you, might not even work out after months of development. Kraven has enough on his plate as is; adding a project that will most likely not pan out is just a waste of time. I also need to mention that this server does not have a custom client, which means that inserting code is sometimes impossible and the tools the devs can work with are very limited, so this solution might be completely out of this server's scope.

Finally, I think you are delivering an overly complicated surface solution to a problem that can be solved from its roots. You are trying to deny smegas so that the RWTer Crowley won't be able to advertise his product, instead of actually making his product useless. The real solution to the large-scale RWTing scene is a robust auto-ban system that could block hackers right away from abusing their hacks and using the money gained for RWTing. And that's being worked on as we speak, so I doubt AI filters would even be needed once the auto-ban system works as intended.
Seems like a pretty overkill solution. Just disable/censor all smega/chat messages containing the word "RoyalsBot" or URL links and problem solved. You don't need Deep Learning to try to ban one RWTer lols.
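Something like this, as a rough sketch (the keyword list and URL pattern are just illustrative):

```python
# Simple keyword/URL filter for smegas: block the message if it names
# the bot or contains anything URL-like. Blocklist is illustrative;
# in practice the GMs would maintain it.
import re

BLOCKED_KEYWORDS = ["royalsbot"]
URL_PATTERN = re.compile(r"https?://\S+|www\.\S+|\S+\.(?:com|net|org)\b",
                         re.IGNORECASE)

def should_block(message):
    lowered = message.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return True
    return URL_PATTERN.search(message) is not None
```

Of course, a fixed list like this can be dodged with spacing or misspellings, which is exactly the OP's objection to keyword filters.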
Well, I was asked if I would write the code. Seeing as I have no status in this community, and not even access to my account, making the entire Deep Learning solution with the server implementation without "a form of compensation" (which does not automatically mean money) would be out of the question for me right now.

I understand why you would think this requires a lot of trial and error, and I must admit I do like the Code Bullet videos, though the technique used in your linked video is Machine Learning rather than Deep Learning. A machine learning program improves itself over time based on previous events; a Deep Learning program is trained to perform certain actions on a controlled dataset. This means that as soon as the dataset has been collected, it's just a matter of training and tweaking the network's hyperparameters to get the best accuracy. With Deep Learning, the trial-and-error part is simulated during the training of the network.

It might look like overkill, but any other suggestion is a patch on an open wound; this one actually seals it. Most of the work in this project is data collection; training the network is something that takes time, but not effort.

I understand what you're saying, though smegas are just one use case. What if people talk about RWT in a trade? Which GM will find out without a system check? Eventually, this technique could also be used to fix the auto-ban system, in a way that it can never be permanently circumvented anymore. Though since I don't know which data they store for every user, it might already be possible with the current data, or it might not. You are suggesting another patch solution, but my answer to your remark is probably already included in this message.
-----------------------------------------------------

Also, in order to test the effectiveness of this filter, it does not have to be deployed in-game. After training, a desktop/web app can be used for users to interact with the network and see if their message contains any illegal sentiment. If the community helps with collecting message data, I can train a neural network to demonstrate.
Just a thought, but what if you are Crowley, trying to get info on which data is recorded and stored on the server to improve the way you write the code for your own hacks, so it's even less likely to be detected? After all, Crowley makes a living from RWTing; it might not be too much effort to infiltrate the server's dev department haha. Not forcing your way in by "offering to help write the code", but rather being passive, still secretly hoping to gain that opportunity. Again, just a thought.
Haha, I'm not Crowley. Not every programmer with a set of brains has bad intentions. I simply stated that I don't know what data is stored, so I can't make an estimate as to how much work it would take to use Deep Learning as a second-layer anti-cheat (or anti out-of-order behavior) system. For now, this thread is about an effective smega filter (which can be used for normal chat and trading too).
For instance, OSRS uses Deep Learning heuristic analysis to determine if a player is botting, and to date no one has fully beaten it. I could set up an example filter for you to play around with if I am given enough message data.
I never said it was Deep Learning; I said it was an example of neural networks. It doesn't really matter, you're sort of splitting hairs, as by your own admission you'd have to train the program for accuracy... and you have no way of saying how easy or how quick that will be. lul

There are thousands of bots. Sure, most get caught pretty easily, but there were some in the top 100 in skills who were caught botting, and it begs the question how many thousands of hours they did get by with.

Granted, botting detection is a completely different beast than catching bad behavior in the written word. I don't think the two are comparable at all, even if in a vacuum they seem the same.
An example of neural networks demonstrated in a way that is completely irrelevant to this project. You were talking about trial and error (probably because you saw those cubes jumping around in the video), but the training process for this program is completely different (and it's automated, so it doesn't matter how long it takes). The Code Bullet videos are fun, but please don't use them as a technical reference.

And yes, bot detection and a chat filter are two completely different things. As I've stated before, the one uses Deep Learning sentiment analysis, the other uses Deep Learning heuristic analysis. It's all said in my posts above; just because both involve Deep Learning doesn't mean they're the same.