The fundamental idea of business-to-business CRM is often described as allowing a larger business to be as responsive to the requirements of its customers as a small business. In the early days of CRM, however, "responsive" too often became "reactive". Effective larger businesses acknowledge that they need to be pro-active in seeking out the views, concerns, needs and levels of satisfaction of their customers. Paper-based surveys, such as those left in hotel bedrooms, tend to have a low response rate and are usually completed by customers who have a grievance. Telephone-based interviews are often affected by the Cassandra phenomenon. Face-to-face interviews are costly and can be led by the interviewer.
A large, international hotel chain wished to attract more business travellers. It decided to conduct a customer satisfaction survey to discover what it needed to improve its services for this kind of guest. A written survey was placed in each room and guests were asked to fill it out. However, when the survey period was complete, the hotel discovered that the only people who had completed the surveys were children and their grandparents!
A large manufacturing company conducted the first year of what was designed to be an annual customer satisfaction survey. The first year, the satisfaction score was 94%. The second year, with the same basic survey topics but a different survey vendor, the satisfaction score dropped to 64%. Ironically, at the same time, the company's overall revenues doubled!
The questions were simpler and phrased differently. The order of the questions was different. The format of the survey was different. The targeted respondents were at a different management level. The General Satisfaction question was placed at the end of the survey.
Although all customer satisfaction surveys are used for gathering people's opinions, survey designs vary dramatically in length, content and format. Analysis techniques may utilize numerous charts, graphs and narrative interpretations. Companies often use a survey to evaluate their business strategies, and several base their entire business plan upon their survey's results. BUT…troubling questions often emerge.
Are the results always accurate? …Sometimes accurate? …At all accurate? Are there "hidden pockets of customer discontent" that a survey overlooks? Can the survey information be trusted enough to take major action with confidence?
As the examples above show, different survey designs, methodologies and population characteristics will dramatically alter the results of a survey. It therefore behoves an organization to make certain that its survey process is accurate enough to generate a true representation of its customers' opinions. Failing that, there is no way the business can use the results for precise action planning.
The characteristics of a survey's design, as well as the data collection methodologies employed to conduct the survey, require careful forethought to ensure comprehensive and accurate results. The discussion below summarizes several key "rules of thumb" that must be adhered to if a survey is to become a company's most valued strategic business tool.
Survey questions should be categorized into three types: the Overall Satisfaction question – "How satisfied are you overall with XYZ Company?"; Key Attributes – satisfaction with key areas of the business, e.g. Sales, Marketing, Operations, etc.; and Drill Down – satisfaction with issues that are specific to each attribute, and upon which action can be taken to directly remedy that Key Attribute's issues.
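In software terms, this three-tier breakdown maps naturally onto a simple nested structure. The sketch below (Python; the attribute names and question texts are purely illustrative, not taken from any real survey) shows one way to hold an Overall Satisfaction question, Key Attributes, and their Drill Down questions:

```python
# Illustrative three-tier survey structure: one Overall Satisfaction
# question, Key Attributes, and Drill Down questions per attribute.
# All names and question texts are hypothetical examples.
survey = {
    "overall": "How satisfied are you overall with XYZ Company?",
    "attributes": {
        "Sales": [
            "How satisfied are you with the responsiveness of your sales contact?",
            "How satisfied are you with the accuracy of quotations?",
        ],
        "Operations": [
            "How satisfied are you with on-time delivery?",
            "How satisfied are you with order accuracy?",
        ],
    },
}

def question_count(survey):
    """Total number of questions: all drill-downs plus the overall question."""
    return 1 + sum(len(qs) for qs in survey["attributes"].values())
```

Keeping drill-down questions keyed by attribute makes it straightforward to trace a weak Key Attribute score back to the specific issues that drive it.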
The Overall Satisfaction question is placed at the end of the survey so that its answer is informed by more comprehensive reflection, the respondents having first considered their answers to the other questions. A survey, if constructed properly, will yield an abundance of information. These design elements should be taken into account: First, the survey should be kept to a reasonable length. More than 60 questions in a written survey can become tiring, and anything over 8-12 questions begins taxing the patience of participants in a phone survey.
Second, the questions should use simple sentences with short words. Third, questions should ask for an opinion on just one topic at a time. For instance, the question "how satisfied are you with our products and services?" cannot be effectively answered, since a respondent may have conflicting opinions on products versus services.
Fourth, superlatives such as "excellent" or "very" should not be used in questions. Such words often lead a respondent toward an opinion.
Fifth, "feel good" questions yield subjective answers on which little specific action can be taken. For instance, the question "how do you feel about XYZ Company's industry position?" produces responses that are of no practical value when it comes to improving an operation.
PAPER SURVEYS Although the fill-in-the-dots format is among the most common types of survey, it has significant flaws that can discredit the results. First, all prior answers are visible, which invites comparisons with current questions and undermines candour. Second, some respondents subconsciously tend to seek symmetry in their responses and are guided by the pattern of their answers rather than their true feelings. Third, because paper surveys are usually organized into topic sections, a respondent is far more likely to fill down a column of dots within a category while giving little consideration to each question. Some Internet surveys, constructed in the same "dots" format, lead to the same tendencies, particularly when inconvenient sideways scrolling is required to answer a question.
In a survey conducted by Xerox Corporation, over one third of all responses were discarded because the participants had clearly run down the columns in each category rather than carefully considering each question.
TELEPHONE SURVEYS Though a telephone survey yields a more accurate response than a paper survey, it can also have inherent flaws that impede quality results:
First, when a respondent's identity is clearly known, concern over the possibility of later being challenged or confronted about negative responses creates a strong positive bias in their replies (the so-called "Cassandra Phenomenon").
Second, research indicates that people become friendlier as a conversation grows longer, thus influencing question responses.
Third, human nature is such that people like to be liked. Gender biases, accents, perceived intelligence, or compassion can all influence responses. Similarly, senior management egos often emerge when attempting to convey their wisdom.
Fourth, telephone surveys are intrusive on a senior manager's time. An unannounced call may create an immediate negative impression of the survey, and many respondents will be partially focused on the clock rather than the questions. Optimum responses depend upon a respondent's clear mind and free time, two things that senior managers often lack. In a recent multi-national survey where targeted respondents were offered the choice of a phone interview or other methods, ALL selected the other methods.
Taking precautionary steps, such as keeping the survey brief and using only highly-trained callers who minimize idle conversation, can help reduce the issues above, but will not eliminate them.
The goal of a survey is to capture a representative cross-section of opinions across a group of people. Unfortunately, unless most of the people participate, two factors will skew the results:
First, negative people tend to answer a survey more often than positive ones, because human nature encourages "venting" negative emotions. A low response rate will therefore usually produce more negative results (see drawing).
Second, a smaller percentage of a population is less representative of the whole. For example, if 12 people are asked to take a survey and 25% respond, then the opinions of the other nine people are unknown and might be entirely different. If 75% respond, however, only three opinions are unknown, and the nine who responded are far more likely to represent the opinions of the whole group. One can assume that the higher the response rate, the more accurate the snap-shot of opinions.
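The arithmetic behind that example can be made explicit with a short sketch (Python; the helper function is hypothetical, not part of any survey tool):

```python
def unknown_opinions(population, response_rate):
    """Return how many people's opinions remain unknown at a given response rate."""
    responded = round(population * response_rate)
    return population - responded

# With 12 people asked, as in the example above:
print(unknown_opinions(12, 0.25))  # 9 opinions remain unknown
print(unknown_opinions(12, 0.75))  # only 3 remain unknown
```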
Totally Satisfied vs. Very Satisfied……Debates have raged over the scales used to depict levels of customer satisfaction. Recently, however, research has shown that a "totally satisfied" customer is between three and ten times more likely to initiate a repurchase, and that measuring this "top-box" category is significantly more precise than any other means. Moreover, surveys that measure the percentage of "totally satisfied" customers, rather than the traditional sum of "very satisfied" and "somewhat satisfied," provide a far more accurate indicator of business growth.
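As a minimal sketch (Python; the scale labels and sample responses are assumed for illustration, not prescribed by the text), top-box scoring simply counts the share of "totally satisfied" responses rather than summing lower boxes:

```python
from collections import Counter

# Hypothetical responses on a five-point verbal scale.
responses = [
    "totally satisfied", "totally satisfied", "very satisfied",
    "somewhat satisfied", "very satisfied", "totally satisfied",
    "neutral", "somewhat satisfied", "totally satisfied", "very satisfied",
]

counts = Counter(responses)
n = len(responses)

# "Top-box": percentage of "totally satisfied" customers only.
top_box = counts["totally satisfied"] / n
# Traditional measure: sum of "very satisfied" and "somewhat satisfied".
traditional = (counts["very satisfied"] + counts["somewhat satisfied"]) / n

print(f"top-box: {top_box:.0%}, traditional: {traditional:.0%}")
# → top-box: 40%, traditional: 50%
```

Note how the two measures can diverge: the traditional sum here looks healthier than the top-box score, even though only the top-box customers are the strong repurchase candidates.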
Other Scale issues…..There are other rules of thumb that are often used to ensure more valuable results:
Many surveys offer a "neutral" choice on a five-point scale for those who may not wish to answer a question, or who are unable to make a decision. This "bail-out" option decreases the number of expressed opinions, thus diminishing the survey's validity. Surveys that use "insufficient information" as a more definitive middle-box choice encourage a respondent to make a decision, unless they genuinely lack the knowledge to answer the question.
Scales of 1-10 (or 1-100%) are perceived differently by different age groups. People who were schooled under a percentage grading system often consider a 59% to be "flunking." These deep-rooted tendencies can skew different people's perceptions of survey results.
There are several additional details that can improve the overall polish of a survey. While a survey ought to be an exercise in communications excellence, the experience of taking a survey should also be positive for the respondent, as well as valuable for the survey sponsor.
First, People – Those responsible for acting upon issues revealed by the survey should be fully engaged in the survey development process. A "team leader" should be accountable for ensuring that all pertinent business categories are included (approximately 10 is ideal), and that designated individuals assume responsibility for acting on the results for each Key Attribute.
Second, Respondent Validation – Once the names of potential survey respondents have been selected, each person is called and "invited" to participate. This step ensures the person is willing to take the survey, and elicits an agreement to do so, thus enhancing the response rate. It also ensures the person's name, title, and address are correct, an area in which inaccuracies are commonplace.
Third, Questions – Open-ended questions are usually best avoided in favour of simple, concise, single-subject questions. The questions should also be randomised, mixing up the topics so that the respondent is continually thinking about a different subject rather than building upon an answer to the previous question. Finally, questions should be phrased in positive tones, which not only helps maintain an objective and uniform attitude while answering the survey, but also allows for uniform interpretation of the results.
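That topic-mixed ordering can be sketched as follows (Python; the topic tags and question texts are hypothetical): shuffle the drill-down questions so consecutive items change subject, while keeping the Overall Satisfaction question last, as recommended earlier.

```python
import random

# Hypothetical drill-down questions, tagged by topic.
questions = [
    ("Sales", "How satisfied are you with quotation turnaround?"),
    ("Sales", "How satisfied are you with contract clarity?"),
    ("Operations", "How satisfied are you with delivery lead times?"),
    ("Operations", "How satisfied are you with order accuracy?"),
    ("Support", "How satisfied are you with issue resolution?"),
]
overall = ("Overall", "How satisfied are you overall with XYZ Company?")

def randomised_order(questions, overall, seed=None):
    """Shuffle drill-down questions to mix topics; keep the overall question last."""
    rng = random.Random(seed)
    mixed = list(questions)
    rng.shuffle(mixed)
    return mixed + [overall]
```

A fuller version might re-shuffle until no two consecutive questions share a topic, but even a plain shuffle breaks the section-by-section pattern that invites column-filling.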
Fourth, Results – Each respondent receives a synopsis of the survey results, either in writing or – preferably – in person. By committing at the outset to share the results of the survey with every respondent, interest is generated in the process, the response rate increases, and the company is left with a standing invitation to return to the customer later and close the communication loop. Not only does this provide a means of exploring and dealing with identified issues on a personal level, it often increases an individual's willingness to participate in later surveys.
A well-structured customer satisfaction survey provides an abundance of invaluable market intelligence that human nature will not otherwise allow access to. Properly done, it can be a means of establishing performance benchmarks, measuring improvement over time, building individual customer relationships, identifying customers at risk of loss, and improving overall customer satisfaction, loyalty and revenues. If a company is not careful, however, it can become a source of misguided direction, wrong decisions and wasted money.