Thursday, June 18, 2015
Thoughts on Beta Testing
The Beta 24 Setup
I wanted only 24 Beta Testers so that I could manage the incoming feedback without being swamped, and use the experience to find the most efficient ways of storing, categorizing, and prioritizing that feedback, making the process as useful as possible.
To set up the Beta 24 I created a Google+ Private Community called Beta 24, along with a 12 Weekly Goals program that would take about 1 hour a week to complete. I provided a document on how people can communicate, with instructions for those who may not be familiar with Google+. The communication channels are:
1. The Private Beta 24 Google+ Community
2. Google Hangouts
3. Mythos Machine Feedback Button
4. Elthos Website Private Beta 24 Forum
5. Direct Email to me.
I got 24 people to sign up. In the first week 18 of the 24 filled out the survey; I then created a login for the web application and forum for each of them and sent it by email (a method recommended by my marketing guru). I created SurveyMonkey surveys of 10 questions or fewer, one for each week's goals, asking basic questions and requesting any recommendations based on that week's goals.
The weekly goals:
Week 1 - Download the Rules PDF, read it, and take the survey.
Week 2 - Use the rules to generate a few Characters and run a few simple Combats.
Week 3 - Go to the Mythos Machine and Create a World (video tutorial provided along with detailed instructions).
I also let on that I would be providing Rewards for feedback - either in the form of thank-you artwork, or through the accumulation of Awesome Points, redeemable for free service at a rate of 1 month per Awesome Point. This idea is more intimated than clearly defined, as I'm still working out the best way to do it. Note: I kind of rushed into the Beta to try to beat the Con Season... not necessarily the best idea, but I also feel I need to push forward, and this is an effort to "get going" even if it isn't perfect. So not all the details have been worked out - but I consider this Beta a Beta of my Beta Testing process, so to speak. As I have very few Beta Testers, I was hoping that the "working this out" part wouldn't be too disruptive... though in retrospect, I think this is possibly another contributing factor to the lack of responsiveness. The more organized and professional a process is, the more likely it is to succeed. But we all have to start somewhere, of course, and this is my first crack at anything like this.
The Beta 24 has a Start Date and an End Date 12 weeks later. It's a structured approach with Weekly Goals that also serve as a Tutorial process through the Rules System as well as the Mythos Machine.
The Beta 24 Results
The survey responses, when they came, were actually quite useful and interesting. To date, no one has responded on the Forum, nor even logged in to look at it. The Google+ Community has gotten a tiny smattering of input from the Beta Testers, mostly comments like "I'm not sure how to find your site... I can't find the original email". That point is interesting because it illustrates a core problem: when I sent out my communications, regardless of the medium (email, forum, or the Google+ Community), if people didn't respond right away, the communication vanished into the miasma and was no longer findable. Naturally, with emails, people simply delete them. With the Forum, no one logs in (I suspect because logging in to something represents a hurdle, and no one wants to bother with a hurdle). The Community has the problem that Google designed it to be Stream-Of-Consciousness... so stuff gets lost in the miasma by design. It's a poor medium in which to try to store organized information.
At the end of Week 3 I postponed the announcement for Week 4, because by Week 2 I had only a trickle of responses (5 out of 24), and Week 3 resulted in 0 responses. To the left are the results thus far. Two of the responders, by the way, are insider friends of the project. I also know that the Beta Testers downloaded the rules book, and when I posted that I had upgraded the rules, pretty much everyone downloaded the upgrades. So they are listening... but not commenting or providing feedback, which I find very interesting and is actually a significant piece of feedback in itself - not about Elthos RPG, but about the Beta Test. Please note - this is an observation, not a complaint. I hold myself responsible for the outcome of the Beta process.
From a responses perspective this has been somewhat disappointing, though not entirely unexpected. I found from my queries to the community that, generally speaking, Beta Tests often have these kinds of results.
That said, the positive news is that I've learned a lot from my experience with Beta 24. I can see there are a number of hurdles that I need to overcome to get a good solid beta going. This is what I'm concluding.
1) All messaging needs to be short, relevant and to the point.
2) Too much structure hampers the process. The time frame and weekly goals create too much overhead, and an impression of too much work.
3) Google Communities is fine for informal chatting.
4) The Forum is not that useful.
5) Surveys have worked pretty well.
6) Any reward system needs to be clear and simple.
So as far as learning important lessons from Beta 24 goes, it's been a big success. I'm going to work on all of that, and I think I will soon transform the Beta 24 into a Rolling Open Beta, which should resolve these issues.
Open Rolling Beta
The Rolling Open Beta idea is to allow people to come in as Beta Testers at any time by creating a Beta Test Account with "Beta_" in the name - so "Beta_JohnDoe" would work. With this they will be directed to the Instructions for the Rolling Beta, which is the Weekly Goal framework, with video tutorials for the Mythos Machine goals. The entire system would work as a tutorial for the Mythos Machine, while soliciting feedback in exchange for Awesome Points. The Weekly Goals would be renamed "Part 1", "Part 2", etc., so that there is no time pressure. People can go through the Beta-Tutorial program as quickly or slowly as they wish, and use it as a Tutorial system and/or a Beta Test where they can gain Free Service (or other possible goodies, tbd) in the form of Awesome Points. The question I have about this model is... what is the difference between coming in as a Beta Tester and just being an ordinary user? I mean, if I'm assigning Awesome Points for feedback, why not allow ordinary users to also get Awesome Points for feedback? Well, the reason is that everyone would then simply write in any feedback and get Free Service... so that wouldn't work. A Beta Tester has a Beta Account... and that's specific to the Beta Test... hmmm... well, as you can see - I'm still in the process of brainstorming this idea. The details need to be worked out.
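To make the mechanics concrete, here is a minimal sketch of the two rules described above: Beta accounts are identified by the "Beta_" name prefix, and Awesome Points redeem for free service at 1 month per point. This is purely illustrative - the function names and the Python implementation are my own, not anything from the actual Mythos Machine code.

```python
# Illustrative sketch only - not Mythos Machine code.

BETA_PREFIX = "Beta_"
MONTHS_PER_POINT = 1  # 1 Awesome Point = 1 month of free service


def is_beta_tester(username: str) -> bool:
    """A Beta Test Account is any account whose name starts with 'Beta_'."""
    return username.startswith(BETA_PREFIX)


def free_service_months(awesome_points: int) -> int:
    """Convert accumulated Awesome Points into months of free service."""
    return awesome_points * MONTHS_PER_POINT


print(is_beta_tester("Beta_JohnDoe"))  # True
print(is_beta_tester("JohnDoe"))       # False
print(free_service_months(3))          # 3
```

The prefix check is also one possible answer to the "ordinary user" question: only accounts created under the Beta_ convention would be eligible to earn points, which keeps the reward gated to people who explicitly opted into the Beta.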
The more I think about it the more it seems that the problem at its root is that people do not want to deal with anything complicated, and they want to get things for free. This of course runs against the need of the game designer to get relevant feedback.
Ultimately, the simplest form of feedback is whether or not people use your product. If they use it, that's positive feedback. Most people will not say one way or another if they like or don't like a product. They have no obligation to do so, and most people resent being asked to do so without some sort of reward. They will, however, be likely to complain if something is broken.
Given this - I'm almost persuading myself that the effort going into trying to set up and conduct Beta Tests is actually something of a waste of time and effort, and has a tendency to be disappointing (as I've heard from other designers who I have queried about this). How much of the information we get back is actually all that useful? Maybe not much.
Perhaps the better way to go is simply put the product out there and offer a feedback mechanism that lets people, should they wish to do so, offer feedback. The true feedback in fact is the simple binary... are people using the product?
And for those who use it, if they like it they probably won't comment. If they don't like something, or want to see an enhancement, or found a bug, they may well comment. The reason why... because if they are using the thing, it's because they find it compelling enough to use, and if they find a flaw they probably would like it even better if it's fixed. Hence they are fairly likely to provide feedback at that point.
I'm not sure if all that makes sense or not. The question is the Effort / Benefit ratio. There is obviously benefit to getting feedback. But how that feedback comes in, and whether I attempt to provide an organized process for collecting it or let it simply come in as it may... that is the question.
What do you think?