
## anonymous 4 years ago SmartScore -- An Explanation [Please read if you have any initial questions, then feel free to ask more!] @Laura* @cshalvey @shadowfiend @mattfeury @farmdawgnation @chris @darthsid chime on in

1. anonymous

YAYYY!!!

2. anonymous

Your SmartScore is a new concept that attempts to reflect the effort and ability you apply not only to learning, but also to helping and supporting fellow users who share a desire to spread knowledge. We analyze the actions and behaviors of all users on the site. This analysis is meant to capture the things that grades cannot, such as your ability to collaborate with other people, your willingness to be a compassionate helper of a person in need, or your natural urge to ask questions in order to understand new concepts. The score itself is determined relative to the scores of other users. The variables, categories, and mathematics behind the score will always be improved and altered as we add new features to the site. Your score may change due to these improvements, but ultimately you have the most power to change and improve your own score, simply by using OpenStudy to study with your peers and by aiding the effort to make the world one big study group.

3. anonymous

You'll notice the new profile pages include a graph displaying the progress of your SmartScore over the past few weeks. This represents your overall SmartScore, which is currently comprised of your scores in the three categories we've decided to focus upon (for now):

Teamwork - Focuses on your willingness and aptitude to communicate and interact with others in group settings, both very small groups like those formed within a question and large groups like all users studying mathematics. Your interactions in chat count, but your actions within questions -- asking, answering, helping, guiding -- are weighted more heavily, as they are core to the principles of the site.

Problem Solving - The ability to identify a situation or question to which you are willing to apply your own effort to construct a coherent, thoughtful, and respectfully toned solution, to the benefit of another and even yourself.

Engagement - Focuses primarily on the effort you put into the questions you have asked because you want to learn a new concept, and your willingness to interact with the people who are helping you.

You're surely wondering what each variable in the category boxes actually means. Below are the descriptions. Keep in mind, these are not all the variables that we are analyzing. There's a lot more to the score, and we'll keep you informed about the progress.

4. anonymous

Problem Solving
- Questions Answered: These are the questions that other people have asked and you decided to answer.
- Medals for Answers: These are the medals you've received for helping someone in need, specifically someone who asked a question. You took the time to help the asker understand a new concept, and the asker thanked you with a medal.

5. anonymous

Teamwork
- Fans: These are the people who think you're awesome at helping others learn hard concepts, at asking good questions, or even just at being friendly and telling a good joke.
- Testimonials: These are written by the people who really think you're awesome, and they just had to tell you.
- People Helped: These are the people whose questions you've answered when they really needed help; you signed on, saw their question, and swooped in with a solution. This count is unique, meaning each person is counted once.

6. anonymous

Engagement
- Medals for Asking: These are the medals other people have given you because you were involved and collaborative in the questions that you ask, really showing a desire to learn.
- Days Studied: This tracks the days you've been on OpenStudy, either helping or being helped.

7. anonymous

woowww..very thorough :DD

8. anonymous

Some FAQs:

1. If I have a SmartScore of 50, is that bad? The score is on a scale of 0 to 100, but don't get stuck in old ways of thinking, like treating a SmartScore of 50 as if it were a 50% on a Biology test. The score is a measure of your efforts on the site relative to those of your peers. Actually, a 50 SmartScore is a good score in our system! Be proud, share it, and of course, improve it!

2. How do I improve my SmartScore? You improve your score by participating on OpenStudy: helping others, asking questions, and supporting the open learning-based community you've helped to create. We haven't listed all the metrics and variables that we use to determine the score, but some of the major variables are listed in your profile and explained (see above).

3. There are differences between the numbers on my profile; what is up with that? The score, variables, queries, and analysis we use will always be changing. Discrepancies are thoroughly examined, so please inform us of any that you find. If you have any feedback related to SmartScore, please post a question in the OpenStudy Feedback section (perhaps in this question/thread). We'd love your feedback!

9. ujjwal

what is invited users? I saw the term in saifoo's profile.

10. anonymous

maybe the ones he has recruited?

11. anonymous

You can share your questions and SmartScore now; primarily you share these on social media, and this sharing can be seen as an invitation. "Invited users" then refers to people whom you invited and who joined OpenStudy.

12. anonymous

ahhh exciting >:)))

13. ujjwal

i see.. thanks for the info! I am going to share my smartscore on fb.

14. anonymous

these administrators never cease to surprise us lol

15. anonymous

"This analysis is meant to capture the things that grades cannot such as your ability to collaborate with other people, your willingness..." It's not that universities are going to ask for my SmartScore on OpenStudy, is it? So why did you create it?

16. anonymous

@Ishaan94 This is a new approach, and with a new approach, adoption takes time. We created it because we felt it was necessary to help accurately represent learners. Because this is a social site, with dynamic content created by all of you users, we get to see aspects of your personality that perhaps an A in Chemistry does not accurately portray.

17. anonymous

could you delete my question from yesterday?

18. anonymous
19. anonymous

And how about making problem solving relative to a question's difficulty? Solving one of FoolForMath's problems is way tougher than solving hundreds of slope and line problems. Maybe you could allow some users on OpenStudy, or you yourselves, to increase the problem-solving score on some problems. Problems posted by KingGeorge are solvable by almost no one on OpenStudy. So the problem-solving score for KingGeorge's problems should be much, much higher. I don't know if you have already accounted for that.

20. anonymous

We do indirectly consider this: we consider receiving medals from high level users to be more significant/valuable
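The weighting described above could be sketched as follows. This is purely illustrative: the staff only say that medals from high-level users count for more, not how much more, so the `medal_value` helper and its formula are our own assumption.

```python
def medal_value(giver_score, base=1.0):
    """Weight a medal by the giver's SmartScore.

    Hypothetical formula: linearly scale the base value of a
    medal by the giver's score on the [0, 100] scale.
    """
    return base * (1 + giver_score / 100)

# A medal from a score-90 user vs. one from a score-10 user
print(medal_value(90))  # 1.9
print(medal_value(10))  # 1.1
```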

21. anonymous

And one thing more: how do you calculate the SmartScore? My SmartScore is 99, but my problem solving is 93, teamwork is 75, and engagement 71. How did my SmartScore get to 99? It shouldn't exceed 93, right? I can be wrong, and I think I'm wrong.

22. anonymous

Each category is first evaluated individually. The overall score is comprised of each category, but necessarily the score that you see, which is between [0,100]. In the system you have a raw score of points accrued for each category, which we then use to determine your displayed score, which is in the range [0,100].

23. anonymous

*not necessarily
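Since the score is explicitly relative to peers, one way raw per-category points could be mapped to a [0,100] score is a percentile rank. This is only a sketch of the idea; the `smartscore` function below is entirely hypothetical, as the real formula was never released.

```python
def smartscore(raw_points, peer_points):
    """Map a raw point total to a 0-100 score relative to peers.

    Hypothetical percentile-rank approach: your score is the
    percentage of peers with fewer raw points than you.
    """
    if not peer_points:
        return 0
    below = sum(1 for p in peer_points if p < raw_points)
    return round(100 * below / len(peer_points))

# Example: 340 raw points among peers scoring 100-500
peers = [100, 200, 300, 400, 500]
print(smartscore(340, peers))  # 60
```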

24. anonymous

Oh, hmm. I still don't think I understand it, but thank you for your time :-)

25. saifoo.khan

Awesome. PS: you posted the same paragraph twice. ;)

26. anonymous

haha, thanks @saifoo.khan got carried away!

27. saifoo.khan

@dpflan . ;) @ujjwal, where is that "invited users" thing? I can't see it.

28. ujjwal

I saw it here.. see in TEAMWORK.. http://openstudy.com/users/saifoo.khan

29. saifoo.khan

Oh yes, that's 2. which seems wrong.

30. ujjwal

wrong??

31. saifoo.khan

Yes, i invited more. @ujjwal

32. saifoo.khan

but nvm.

33. anonymous

Saifoo, all of those users you've been signing up via the link for your internship now give you points :)

34. anonymous

"Invited Users" refers to those whom you've sent invites to and who then actually sign up and use OpenStudy.

35. ujjwal

I have never seen so many users viewing a single question all at a time with full patience...

36. ujjwal

and how do we send invitations?

37. anonymous

The formula is (still) a secret, isn't it?

38. anonymous

I still don't understand how the score is calculated. It's not the sum of your subscores and it's also not the average...

39. anonymous

You can share your questions and SmartScore now; primarily you share these on social media, and this sharing can be seen as an invitation. You share by posting to Facebook or Twitter. This is great because you are reaching out to people you know who could help on OpenStudy. "Invited users" then refers to people whom you invited and who joined OpenStudy.

40. anonymous

@FoolForMath Correct, the actual math behind the calculation has not been released, like that secret formula for Coca-Cola.

41. shadowfiend

Notably, we haven't actually migrated old invitations, so right now it's just people who you have invited since we deployed SmartScore. Working on the migration now :)

42. anonymous

I like the way you defined: "The score is a measure of your efforts on the site relative to those of your peers"

43. anonymous

Virtual (+1) for that.

44. anonymous

virtual *high five* @FoolForMath

45. anonymous

so we just post this in fb/twitter and if someone joins coz of it our Invited Users will rise?

46. anonymous

haha, Cheers mate! :) @dpflan

47. saifoo.khan

That's cool. i just invited someone yesterday as well. ;D

48. anonymous

^show off :P

49. anonymous

Ohhhhh... so this is what I've been coding on for the past two weeks.... it all makes sense now. ;)

50. shadowfiend

Ok, now the invites are up to date, but there weren't many. Keep in mind that to be counted in this, you have to use one of the share buttons. Either that or you have to include ?inviter=<your user id> on the end of the URL you send. Otherwise we have no way of seeing who sent you.
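The `?inviter=<your user id>` convention described above could be handled like this. The parameter name comes from the post; the helper functions themselves are our own illustration, not OpenStudy's code.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def invite_url(base_url, user_id):
    """Append an inviter parameter so sign-ups can be attributed.

    Mirrors the ?inviter=<user id> convention from the thread;
    this helper is a hypothetical sketch.
    """
    sep = "&" if urlparse(base_url).query else "?"
    return f"{base_url}{sep}{urlencode({'inviter': user_id})}"

def inviter_from(url):
    """Extract the inviting user's id from a visited URL, if any."""
    params = parse_qs(urlparse(url).query)
    return params.get("inviter", [None])[0]

link = invite_url("http://openstudy.com", "12345")
print(link)                # http://openstudy.com?inviter=12345
print(inviter_from(link))  # 12345
```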

51. anonymous

@Ishaan94: I have never seen KG's problems before. I just solved a couple of them now.

52. anonymous

Thanks for pointing it out.

53. anonymous

lol @FoolForMath never refuses a challenge =))))

54. anonymous

Oh hmm well you're FoolForMath, users like you and zarkon aren't supposed to be included under general observation.

55. KingGeorge

Thanks for looking at my questions :)

56. anonymous

I think it would be nice to see the SmartScore change based on the subject that you are logged into. I always looked at the ranking of the person answering questions as a badge of merit of sorts, giving more credit to the answers of the higher-ranked person. If someone is great at math and earns all his medals and fans there, it doesn't seem that we should automatically assume he is amazing at biology or another subject. I like the idea of the SmartScore, and maybe it will take me some time to get used to it, but I would change that small aspect of it if given the chance.

57. anonymous

Do you know what the sad part is? "You took the time to help the asker understand a new concept, and the asker thanked you with a medal." Yeah, this doesn't happen. It's usually passing users who thank the answerer, not the asker. Okay, minor nitpick. It's not really important.

58. shadowfiend

We're looking into how we can deal with the subject part of the score. Nothing's been decided yet.

59. anonymous

How does "Medals for Asking" work if the medal box has been removed from questions?

60. ujjwal

My SmartScore is 60.. But I am probably zero in biology.. However, even in the biology group, my SmartScore remains the same.. And I think something should be done about this: we are not equally good in all subjects, and the SmartScore doesn't make that distinction.

61. anonymous

@ujjwal Your score is not determined by subject-specific performance. This score is meant to capture those elements of you as a learner that are present with respect to any subject you can study. We are not measuring your specific ability to answer the questions in a given subject; we are trying to establish your skills and talents related to the overlying idea of each category [Problem Solving, Engagement, Teamwork]. For instance, there are many types of problems to solve -- mathematics, biology, history, etc. You may prefer to answer questions in certain subjects over others, but you are still exhibiting actions and behaviors that display the willingness and aptitude to solve problems. We are trying to remove the subject specificity. You can still see in your profile the top 3 groups where you have participated, and people can see all the questions you have asked and answered and all the medals you've received in your profile too.

62. anonymous

But, you're correct in a sense, and @zbay raises a valid point: when there were group specific scores you could actually see whether a user was established as a solid source of help within the given subject. @shadowfiend mentions above how, while we develop this concept of SmartScore, we will look into determining subject specific metrics.

63. anonymous

@arcticf0x (or should I say Federer?) With regard to your question: I'll refer you to this question: http://openstudy.com/study#/updates/4f8df682e4b000310fac6912

64. anonymous

@badreferences You mentioned the infrequency of askers awarding answerers. This has been considered, and it represents the ideal scenario: asker and answerer(s) engaged in a common goal of elucidation and understanding of a concept. There are many possibilities for why askers do not award as frequently as you've perhaps noticed: unaccustomed to the system, time-constrained, distracted, etc. But when the dynamics of a question are such that the asker and answerer(s) are fully interacting, have you noticed the same issue? In the end, your answer is still being recognized by other users, which is awesome. But yes, that is an issue to look into.

65. anonymous

What about giving medals to questions?

66. anonymous

@nickymarden We currently look at questions you asked where you gave a medal. Expect modifications and improvements to the score that consider your role when you give a medal to someone: you could have: (1.) asked the question, (2.) participated in another person's question, or (3.) you could have just observed another person's question

67. Carniel

So the more fans you have the higher your Teamwork?

68. anonymous

@dpflan as far as the major-specific scoring goes, why don't you just add an option for users to choose for themselves? Like "Go to profile and select a couple specialties so that other users know where you excel." Then, give the avatars a color code or a little icon stamp to identify.

69. anonymous

@Carniel As stated above, fans contribute partially to teamwork, and not exclusively.

70. anonymous

(P.S. Mods, this thread needs to be sticky.) Perhaps create a separate Help and FAQ group...

71. anonymous

@Dyiliq What you suggest is good in order to yield a more accurate representation of the user. You can gauge subject competency/interest by looking at the "Most Active Subjects," though this only lists 3 subjects so far. This information is not directly available when that user is helping you with your question in a given subject. As we continue to develop the SmartScore and related concepts, having more fine-grained and representative information easily accessible will be crucial. Thanks for the suggestions/critiques.

72. anonymous

@sasogeek I'll refer your point of interest to the following question: http://openstudy.com/study#/updates/4f8da4ffe4b000310faa6f70 Site performance is taken very seriously as you see @mattfeury @shadowfiend @farmdawgnation discuss in the above question.

73. anonymous

- Testimonials These are written by the people who really think you’re awesome, and they just had to tell you. messages is for that now! :D

74. anonymous

Hi saso, as always I'm disappointed to hear that you're disappointed. ;) As I mentioned in the thread that Dan was kind enough to link to, if we've stopped focusing on performance it's because we're not seeing any issues that jump out at us at the moment.

That said, we've progressed leaps and bounds over the past month on the performance front. @shadowfiend and @mattfeury did a lot of good work isolating and patching issues on the server related to our ability to garbage collect efficiently. Full GCs (the ones that cause everything to pause) have gone from taking ~70+ seconds after 24 hours to sitting at around ~7 seconds after 24 hours, and we're not done on that front. That's a huge improvement that I personally have seen reflected on the site.

Additionally, I've been involved in a lot of client-side optimizations, specifically focused on IE8, but the improvements are visible in other browsers as well. If it's taking you 7 minutes to type something, it would sound like your issues are in the client-side realm that I have been optimizing. I'm working on some more changes today.

As always, if you're using Internet Explorer as your browser, we encourage you to use Firefox 11 or Chrome; these browsers provide superior client-side performance over IE. If you are having issues in FF or Chrome, then we can try to diagnose those. I've got plans for some client-side testing code that we can run to remotely diagnose issues like this, but it's not in place yet, so we can only see so far into how our code runs on your machine.

75. anonymous

@Tomas.A "Messages is that now" what do you mean by that?

76. anonymous

I suggest that a site performance focused discussion take place/continue in the question I linked to, so that this question remains on topic related to SmartScore ;) Discuss: http://openstudy.com/study#/updates/4f8da4ffe4b000310faa6f70

77. anonymous

i said messages is for that, to write to people that they are awesome..

78. shadowfiend

Indeed folks, please keep the performance discussion out of this fine (and informative) question. @Tomas.A While messages can be used to tell someone they're awesome, fan testimonials announce it to the world, as well.

79. Hero

I appreciate all of the efforts being taken to improve OpenStudy by guys like @Dpflan, @shadowfiend, @farmdawgnation, and @cshalvey, and all the other mods/staff who are doing the behind-the-scenes stuff.

80. Hero

That being said, I really don't see a full, clear explanation as to what components go into measuring/calculating qualities such as teamwork and engagement.

81. Hero

Basically, I would like to know precisely how such things are calculated.

82. anonymous

@Hero we don't get to know that, it's proprietary. Especially since it is an incomplete beta feature (am I wrong, @shadowfiend ?)

83. anonymous

I said it before, and I say it again: we need to methodically lower our scores to reverse engineer the formula! :P

84. anonymous

I checked my new SmartScore a few days ago and saw that it was 96. After a few days absence because of illness, I logged back in to OpenStudy and now I'm only a 60. ???

85. anonymous

@badreferences How can you make that work if the scores are modified on the individual user level by moderators?

86. anonymous

@Atam Pleez view the rest of this thread. Scores are beta and subject to change.

87. anonymous

@Dyiliq - I read through the 50+ posts and my question was not answered. Obviously, if I'm on this thread, I understand that SmartScore is in beta. I'm evaluating OpenStudy from a student perspective for possible use by my institution, and I am curious about why my score would drop nearly 40 points after a few days of absence. I think that's a reasonable question for this thread, and I am not sure telling a user to read through 50 posts for an answer is helpful! Just sayin... I have a conference call with Preetha next week, so I will ask my questions then.

88. anonymous

@atam it was answered in another thread: they changed the formula, which is why your score dropped (too many people had too high a score, I guess), and it's absolutely not because you were absent. To make you feel better, I was 99 and now I'm 77 lol

89. anonymous

Thanks @Tomas.A , that makes sense.

90. anonymous

So a machine learning algorithm is behind SmartScore?

91. shadowfiend

For now, it's all human learning and statistics ;)

92. shadowfiend

@Atam in particular you saw the very first draft, whose purpose was for us to see if there was a skew to the scores. It turns out, there was a very big one ;) @Hero We do intend on being a bit clearer once things settle down a bit :)

93. anonymous

SmartScore sounds like good stuff, a novel way of quantifying individual learners' attributes. Can't wait until it's open-sourced :-D

94. anonymous

@atam I was positive someone already asked that....for that I apologize.

95. anonymous

@agdgdgdgwngo We don't know if it's a novel way of "quantifying individual learner attributes" unless we literally have the formula. We can only hope. And complain. (Mostly complain.)

96. anonymous

@agdgdgdgwngo Also, is there an IPA pronunciation of your username? I'm really curious.

97. anonymous

@badreferences Lolz on the complaining... and double lolz on the username question. I was beginning to miss your sardonic anecdotes. But go easy on the proprietary SmartScore... things like that definitely don't need to be picked up by another, better funded but less intelligent, study site. It would ruin this one. I think the derivation of these scores should remain proprietary, as intellectual property.

98. anonymous

Their goals for smartscore are a genius mechanism that may actually be considered in employment someday.

99. anonymous

@badreferences Do you feel that your score reflects your efforts on OpenStudy? This is a weak test of the validity of the metric without releasing the formulation ;)

100. anonymous

@Dyiliq I'm afraid I don't see employers considering "hidden scoring rubrics" for potential employees. The SAT, ACT, GRE--and even Wechsler/Stanford-Binet IQ tests--have proprietary question randomization "methods", but they make it pretty clear how people get certain scores, and how such scores are normally distributed. I might only be speaking for myself, and I'm not exactly an expert on the subject, but if I were an employer, I would see someone's SmartScore, and ask, "What does that 99 tell me about you?" "That I have problem solving, team work, and engagement!" "Yeah, but what do those mean?" "Well, problem solving means, uh, I solve problems and apply critical thinking. Team work means I get along with other people. And engagement, uh, I know how to get involved in the problem solving process--" "Yeah, shut up. That doesn't mean anything to me. How did they assign a number to any of the above? A written review of your performance is one thing. Basing your problem solving, teamwork, and engagement skills on a 'score' seems a lot like basing intelligence on an IQ test. It. Doesn't. Work. Unless you tell me exactly what you're measuring, and even then it might not work. (Case in point--IQ test.)"

101. anonymous

@dpflan I really don't know what my score represents, so I can't comment on how good a reflection it is of me. My gut instinct is, "Oh God, I got a 69. That's a D+! I'll never get into grad school now!" But I don't know the mean, the standard deviation, or even the scoring metric. Lacking these things, I cannot seriously judge myself--not at all considering personal bias, because I'm obviously the best. Around. Nothing's ever gonna get me down.

102. anonymous

Well, right now it definitely will get nothing but looks of derision. But someday....

103. anonymous

@badreferences Yes, the algorithm is still being modified. Once settled, we plan to elucidate what makes a SmartScore. Perhaps, it might be a fun thing to post in the Mathematics group.

104. anonymous

Your gut reaction is something that we're trying to modify; we could scale your score up so that you'd have 690. But the familiarity of the [0,100] scale and its association with conventional grading made the score seem like something people could sort of "get." It may be better to have our own scale, or it may be useful to change the conventional association of [0,100] values with grades and "intelligence." But again, we're going to tweak things and keep you in the loop as soon as we can.

105. anonymous

So, your feedback on this is crucial and appreciated. Understanding your immediate reaction to your score is really helpful.

106. anonymous

@badreferences @Dyiliq @Hero @FoolForMath and anyone else who views this question, I'd appreciate learning your initial reactions to the score and thoughts related to improvement. [I will have to return to this thread later because I have class from 6pm-730pm, so I won't immediately be able to talk with you]

107. anonymous

@badreferences: See how @dpflan defined the SmartScore: "The score is a measure of your efforts on the site relative to those of your peers." So your gut instinct is not justified here. Judging by your activity on the site, I can safely say that you know more math than quite a few users with even 99 scores. So it's just a rough estimate of how active you are on this site. No need to over-think :)

108. anonymous

That said, it is very hard to put a number on anybody's intelligence/smartness.

109. anonymous

I'd prefer not to be graded on "knowing more math", but rather on "problem solving", "teamwork", and "engagement" -- the things we are technically being "graded" on. Unfortunately, it's easier to grade math knowledge than it is to grade the others. For the Wechsler IQ test, we are given patterns of shape deformation. It removes the problems of linguistic skills, mathematical knowledge, process abstractions, and memorized strategies. The IQ test is just that: it tests how well we can recognize patterns, and not a bit more. Not math, stat, linguistic, strategy, etc. knowledge. Unfortunately, however hard they might try to make the IQ test as "basic" as possible, it will only ever remain a test of pattern recognition, not intelligence. Does that mean the IQ test is useless? No; it's fun to do at times, much like regular quizzes and brainteasers. My point being that my expression of dissatisfaction is not really dissatisfaction -- I'm well aware that it's unlikely these scores are truly grading "problem solving", "teamwork", and "engagement". But my question is similar to that of the IQ test: I know it's not grading intelligence, but then what specifically is it grading? Pattern recognition? The ability to collect medals quickly in a short period of time?

110. anonymous

lol that's enough typing for one day, I'm going back to watching television, like the good student I am. :P

111. anonymous

My hypothesis: it's grading how active you are on this site, and "problem solving", "teamwork", and "engagement" are the parameters. A higher SmartScore means you are more active on this site; a lower one means you are less so. Ref the nikvist example cited by Ishaan.

112. anonymous

"This would eliminate gaining score for sporadic 'lols' and 'high fives' and other fruitless posts that neither add nor subtract from the productivity of the Q." Darn, this accounts for 95% of my medals. D:

113. anonymous

@dpflan My initial reaction was "Wow, finally, a study group site that incorporates user expertise without omitting the importance of user curiosity. Double wow, they even have a score to indicate just how 'studied' my helpers are!" But alas, I learned that the score is in beta, so I was momentarily discouraged from even joining the site. Then I saw the music study group question... it was all downhill acceleration from there. One way you could improve the accuracy is by creating an instant-flag system that automatically indexes all posts in a group (Q's and A's) according to keywords. Keep this invisible to users, or else it will severely bog down your site. Once the index is made, you can then link users to their keywords and gauge each of their responses to have an impact on their score based on how relevant their post is to the subject. This would eliminate gaining score for sporadic 'lols' and 'high fives' and other fruitless posts that neither add nor subtract from the productivity of the Q.
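The keyword-relevance idea suggested above could look roughly like this. The keyword sets, the scoring, and the `relevance` helper are entirely hypothetical; nothing in the thread indicates OpenStudy implemented anything of the sort.

```python
# Hypothetical per-subject keyword sets for scoring post relevance.
SUBJECT_KEYWORDS = {
    "mathematics": {"derivative", "integral", "slope", "equation"},
}

def relevance(post_text, subject):
    """Fraction of a subject's keywords appearing in a post (0.0-1.0)."""
    words = set(post_text.lower().split())
    keywords = SUBJECT_KEYWORDS.get(subject, set())
    if not keywords:
        return 0.0
    return len(words & keywords) / len(keywords)

print(relevance("find the slope of the line", "mathematics"))  # 0.25
print(relevance("lol high five", "mathematics"))               # 0.0
```

A post like "lol high five" matches no subject keywords and so would contribute nothing, which is exactly the filtering effect the suggestion describes.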

114. anonymous

@dpflan Then there are those who are given medals for incorrect answers, DIRECT answers (which really peeve me), and incorrect help....something needs to be done about that, and I'm not sure what else you could do besides. @badreferences that reminds me, I should have added a clause in that paragraph about medals as well. There are medals being awarded all over the place for funny posts. Studying is not funny. Studying = student - ent(husiasm) + dying. Medals should be given on how enthusiastic and alive students are in their drive to help others as well as learn from others.

115. anonymous

Is the score constantly up-to-date or does it recalculate every once in a while?

116. anonymous

constantly

117. anonymous

ohhh ahhh

118. shadowfiend

While I don't speak in terms of what would be ideal in the SmartScore or not, I'd argue studying can and should be fun. Study groups exist precisely to ease some of the pain of studying. Excessive awards of medals for fun and funny things are, of course, a problem, but a small modicum thereof may in fact be precisely the thing to keep studying on OpenStudy fun and lively, rather than drab and boring. Just a thought.

119. anonymous

in all of this what is the highest rank?

120. anonymous

100, or score. maybe.

121. anonymous

yes, 100 is the highest possible score. The scores range from 0 to 100.

122. anonymous

Wait a minute.....Why cap at 100.... Why not make it a continuously accruing point system? Then it wouldn't matter whether the posts are relevant or not, if you couple that with the keyword-linking system I mentioned earlier. It wouldn't matter because the keywords would be getting the points, not the posts. Keywords can also be attributed to each individual study group, as well as each tripod of the SmartScore system. And it would be simple to just add a clause in the CoC about posting arbitrary keywords at the end of irrelevant posts, which would be easy to oversee and discipline.

123. anonymous

You could still give points for irrelevant posts, in a small amount, just for participation! (Same as attendance)

124. anonymous

Then it would be easy to see the slackers by just comparing their 'enroll date' with their overall score

125. blues

I think it is capped at 100 because Satellite and Amistre have yet to figure out how to have higher SmartScores than more than 100% of the users on OpenStudy. Not precluding the possibility that they will, of course...

126. anonymous

Why is that even necessary? @satellite73 and @amistre64 are obviously integral cogs of this machine, why not hide moderator scores? Mods and builders shouldn't even need to worry about that kind of stuff...the user has a privilege here, in the ability to be here and learn from this site. If they achieve positive results and are satisfied with their experience, they owe it all to the team behind it. Their scores should automatically populate from 9999999 and count down from there, and make it a race for fun who gets to zero first!

127. ujjwal

how is this possible?? Questions Asked 0 Questions Answered 0 Medals Received 12 I found this in satellite73's profile..

128. anonymous

No one gets 100 as their SmartScore? I saw the maximum being 99. Moreover, it would be really cool if we could give medals to more than one person, like in the old version, since beginners get thrilled on receiving medals and come back to explore the site, and it would be encouraging for their efforts. If only one best answer can be chosen, it would be the same person getting all the medals (the top answerer who knows how to give a convincing answer in the right format).

129. anonymous

No one likes my race to zero idea eh?

130. anonymous

@salini Someone has 100; satellite73.

131. anonymous

Even satellite is 99 now.

132. anonymous

@Ishaan94 ಠ_ಠ

133. Hero

Satellite alternates between 99 and 100 give or take

134. shadowfiend

Satellite's oscillations are a bug in the data structure that's tracking the distribution. We're trying to track it down at the moment.

135. amistre64

i thought i was a user of this site :/

136. amistre64

still got no clue what all the smart score means tho :) just goes to show how dumb i can be i spose lol

137. Hero

The need to track down a bug gives the impression that the bug is a suspect in an OS related crime.

138. amistre64

sounds very zen-ish

139. Callisto

Summary: I don't like the idea of considering "question difficulty" in problem solving. As I've observed, there are two types of questions here. One is really for seeking help, no matter what the purpose is -- learning, copying... The other is giving challenges, for bored people, or anything. Of course, this type of question is usually more difficult and takes a long time to answer. But do they really deserve a higher score? I really can't agree. And why should solving more difficult questions like that earn a higher score? -1 - 1 can be easy. Or it is extremely easy. But for a new learner who knows nothing, it can be very difficult to understand. Somehow, the level of difficulty depends on how much you know and how well you know it. How can we judge it? I can't deny that solving one of FoolForMath's problems takes you a long time, but helping someone understand a concept like -1 - 1 = -2 can also take a long time. How can we weigh it? PS: I mean no offense.

140. anonymous

My thoughts are the same as the one above.

141. anonymous

The difficulty of the Q can have a MASSIVE impact. But you're on the right track. If there were to be any differentiation it would have to be in the way of respective denominations of grade-level groups. i.e.: GED Q's High School Q's College Q's (Maybe another quasi denomination for the questions that even the PROFs can't answer)

142. anonymous

@Callisto Yes, difficulty can be seen as a relative and subjective concept. @Dyiliq Assigning sub-topics/tags for questions will be crucial in the future for obtaining more meaningful data about the questions being asked and the questions that a user answers on the site.

143. anonymous

(Maybe another quasi denomination for the questions that even the PROFs can't answer) - Open Question.

144. anonymous

One of the hardest problems is actually to understand how $$(-1)\times (-1)=1$$; this baffled even the great Euler.

145. anonymous

So, how is $$(-1)\cdot (-1) = 1$$?

146. anonymous

And no, just because you're not good at math and some elementary problem is difficult for you doesn't mean you're an expert in problem solving when you solve it. Harder problems should carry a higher problem-solving score.

147. anonymous

@dpflan I figured this was in the works... @Ishaan94 because -1 goes into -1 one time. Not negative one time.

148. shadowfiend

Ishaan—problem solving is not a measure of knowledge, but of process.

149. anonymous

@Ishaan94 I do not see any safe, objective way to determine the difficulty of problems.

150. anonymous

The safest, most reliable method in the past has always been: pre-K, K, 1st grade, 2nd grade, ..., 12th grade, freshman, sophomore, junior, senior, etc.

151. anonymous

Even though curriculum varies from school to school, the Board of Education is generally unanimous.

152. anonymous

@Dyiliq Labeling these topics using the conventional grade system of US schools may actually be less helpful than subject and sub-topic. If we have sub-topics, and even sub-sub-topics, then we can actually see what concepts are being worked on. Then we will have an idea of the types of questions, and we can perhaps determine difficulty based upon rarity (though this could also convey ease, because fewer people need help on such a topic). What if a user could assign the difficulty of a problem, but no one else could know the user's feelings about the question, or if a question could be labeled as review, clarification, or new concept? If the user's perception of difficulty is obtained in addition to the sub/sub-sub-topic of the question, then we can get a better gauge on what the user's strengths and areas for improvement are. Also, with sub/sub-topics we can more easily assign areas of interest/study for askers and "expertise" for answerers. @Dyiliq Because it will be difficult to easily assign subtopics/sub-subtopics to all questions, what do you think of a human computation approach to archiving and creating a taxonomy of historical questions? What I mean would be something where certain users of X minimum SmartScore could be presented with random questions, review the contents, and assign one or two sub/sub-sub-topics. Ratings would not be visible to other users until a significant consensus has been reached by having at least Y users assign sub/sub-sub-topics (to remove bias). So the most agreed-upon tags would be accepted. Then they can be checked. User names in the question may be made anonymous to avoid biases too.
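The Y-user consensus step described above could be sketched like this (a minimal, hypothetical illustration of the idea only; the threshold value and the tag names are invented, and `accepted_tags` is not an OpenStudy function):

```python
from collections import Counter

def accepted_tags(proposals, threshold=3):
    """proposals: one set of sub/sub-sub-topic tags per independent
    reviewer for a single archived question. A tag becomes visible only
    once at least `threshold` reviewers have proposed it, which filters
    out idiosyncratic or biased labels (the "Y users" consensus)."""
    counts = Counter(tag for tags in proposals for tag in tags)
    return sorted(tag for tag, votes in counts.items() if votes >= threshold)

# Three independent reviewers tag one archived question.
tags = accepted_tags([
    {"calculus", "limits"},
    {"calculus", "limits"},
    {"calculus", "derivatives"},
])
# Only "calculus" reaches the 3-reviewer consensus.
```

The most-agreed tags win automatically; disputed ones simply never cross the threshold and can be re-queued for more reviewers.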

153. anonymous

@dpflan you asked [methinks]: "What if a user could assign the difficulty of a problem, but no one else could know the user's feelings about the question, or if a question could be labeled as review, clarification, or new concept[?]" ^^^^ I think that some aspects of this question can be applied -- in combination with other methods -- to the resolution of our problem. Your next question brings it around nicely. ---> You asked: "Because it will be difficult to easily assign subtopics/sub-subtopics to all questions, what do you think of a human computation approach to archiving and creating a taxonomy of historical questions?" ^^^^ Your evaluation of this approach is the remainder of the aforementioned proposal. In total, your entire previous post gave me an idea: why not assign (invisible) 'grade levels' to each user at the sign-up step of membership on this website? For example, a new user -- after entering email, handle, password, etc. -- would be given a survey of questions (like an IQ test, but not focusing on the same attributes) that would gauge the user's ability to answer questions at each level of difficulty. This could even be accomplished where said difficulty varies in an analog way. Now, when a user is answering these Qs, the 'taxonomy' could archive their speed as well as accuracy to determine the level of difficulty they are prepared to take on. This could be something that also evolves as their participation in the website goes on. With this method, assigning any kind of indicators or tags to the Qs themselves would be unnecessary, unless only to make a tagging system that would alert users that a question in their range has been posted. I can go on in describing this, but I'll stop here to allow some breathing time for myself, and questions for all of you. :)

154. anonymous

If problem solving isn't a measure of knowledge, then why does satellite have 100 in problem solving?

155. anonymous

Quoting Dan here: "The score itself is determined relative to the scores of other users." By our algorithm, satellite has performed better than any other user on the system to date in that particular category, so he gets the 100 score.
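As a rough illustration of that relative-scoring idea (a hypothetical sketch, not OpenStudy's actual algorithm; the user names and point totals are made up), a percentile-style scaling where the single best performer in a category lands at 100 could look like this:

```python
from bisect import bisect_left

def relative_scores(raw_points):
    """Map raw activity points onto a 0-100 scale relative to every
    other user: the top performer in the category gets 100, the bottom
    performer gets 0, everyone else falls at their percentile rank."""
    ranked = sorted(raw_points.values())
    denom = max(len(ranked) - 1, 1)  # guard against a one-user site
    return {user: round(100 * bisect_left(ranked, pts) / denom)
            for user, pts in raw_points.items()}

# Whoever has the most raw points in the category scores exactly 100.
scores = relative_scores({"satellite73": 120, "amistre64": 90, "newbie": 5})
```

Under a scheme like this, a score can drop even when a user does nothing, simply because others moved up -- which is consistent with the oscillation people observed at the top of the scale.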

156. anonymous

I have not read every point brought up -- and I don't like contending anything without having a reasonable understanding of the subject -- but if this hasn't been brought up before, then I'd like to point it out right now. Your classical "difficulty" ratings in the US curriculum do not at all represent the difficulty of the problems themselves, and thus can lead to many false impressions. I've schooled at the college/secondary level in China, France, Poland, and England before. In Poland, college-level linear algebra is secondary-school fare (up to projections, at least). In China, multivariable calculus is dealt with before college. (Yet, funnily enough, there is very little mention of sequences and series.) Also, in some French schools, we deal with the absolute basics of abstract algebra, topology, and formal proofs before college. I never stayed long enough to see how far those went, though.

157. anonymous

Even in the US, I understand there is a lot of variation in taught subjects -- even more than in other countries. Exeter High teaches multivariable and linear algebra before college, whereas Norwalk High only gets up to Calculus 2. In short, the "K-12 grade abstraction" is a terrible way to rank difficulty.

158. anonymous

@badreferences "Labeling these topics using the conventional grade system of US schools may actually be less helpful than subject and sub-topic. If we have sub-topics, and even sub-sub-topics, then we can actually see what concepts are being worked on. Then we will have an idea of the types of questions, and we can perhaps determine difficulty based upon rarity (though this could also convey ease, because fewer people need help on such a topic). What if a user could assign the difficulty of a problem, but no one else could know the user's feelings about the question, or if a question could be labeled as review, clarification, or new concept? If the user's perception of difficulty is obtained in addition to the sub/sub-sub-topic of the question, then we can get a better gauge on what the user's strengths and areas for improvement are."

159. anonymous

We probably shouldn't use any school system as an analogy for question difficulty. There are users who are taking courses for fun, who are restarting their studies, etc. Only the topic being studied matters. This way we get a frequency count related to topics being studied, and we can also see the progression of topics a person studies. We could then also analyze relations among topics in subject X and subject Y. Perhaps a student likes a math topic that is also highly prevalent in physics; then we could suggest other fields of study based upon their interests, for example.

160. anonymous

A problem is easy if you can solve it, a problem is hard if you can't.

161. anonymous

No... wait, I never said to rank a question's difficulty on the basis of topics. Maybe you could count upvotes or good-question badges awarded by senior users and then increase the problem-solving score for a selected problem.
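That suggestion could be sketched roughly like so (purely illustrative; the step size, cap, and function name are invented for the example, not anything OpenStudy actually uses):

```python
def problem_solving_credit(base_credit, senior_upvotes, step=0.25, cap=3.0):
    """Scale the problem-solving credit for an answered question by the
    number of upvotes / good-question badges from senior users. The cap
    keeps one unusually popular question from dominating the score."""
    multiplier = min(cap, 1.0 + step * senior_upvotes)
    return base_credit * multiplier

# Four senior endorsements double the base credit under these weights.
credit = problem_solving_credit(10, 4)
```

The cap matters: without it, a single viral question could outweigh months of steady helping, which would cut against the "process, not knowledge" framing of the category.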

162. shadowfiend

To answer your original question to me, if you have a tough logical puzzle that requires nothing further than a basic understanding of logic, it requires no more knowledge than a simple logical puzzle; however, it does require stronger problem solving skills.

163. anonymous

@dpflan did you get anything substantial from my response on 4/22?

164. anonymous

@Dyiliq The evaluative approach is interesting, but it slows the sign-up process down and is focused a lot on answering questions. Many users sign up to have a question posted and answered. I think that subtopics are a more objective approach, though if we had a database of schools we could allow users to select their school upon signing up or editing their profile. This would help with demographics too. A taxonomy of subjects and subtopics is more school-agnostic, in that different schools have different curricula and teach subjects at different levels. Keep the ideas coming, man!

165. anonymous

@dpflan you said "Many users sign up to have a question posted and answered." Wouldn't this be more conducive to "HomeworkEqualsDone.com"? I think that if collegiate users are serious about utilizing the "Worldwide Study Hall" aspect of this site, they won't mind answering a few placement questions to keep the attention to their own questions coming from the appropriate sources. Likewise for the more adolescent scholars. I do like the idea of having a selection of schools at the ready for users to choose from. But that still leaves the problem for high-school students: having them choose their school doesn't really segregate the subject matter of their study needs. Subtopics will be necessary no matter what you do... now that I think about it. But this is something that will solve itself with the segregation of their level of questioning...

166. anonymous

To add on to my first paragraph: the resulting effect of answering those placement questions will eliminate the necessity for the adolescent scholars to weed through really tough or out-of-their-league questions to answer some in their own right. Likewise for the more collegiate users. :)

167. anonymous

The thing that most segregates their study is most obviously the subject they are studying. Level in school is a much broader filter. It would be useful to obtain someone's school level and see what they are studying. We would get a better idea of what topics are being studied worldwide at different and similar levels. You are suggesting filtering techniques to aid answerers.

168. anonymous

Imagine that a new asker doesn't know what to expect when entering OpenStudy, and they ask a question. Then either someone shows up and walks the new asker through how to answer the question, teaching the new asker; or the question goes unanswered; or the question is immediately answered / the answer is just given.

169. anonymous

The same filtering techniques would aid the questioners as well... by placing their questions at the appropriate table, so to speak.

170. anonymous

Yes, it would aid the asker because they may more easily get an answerer.

171. anonymous

Yes, there are tables for this subject and that subject... but then the sub tables would segregate their questions by level of difficulty

172. anonymous

Those sub tables would be made possible by the opening questionnaire.... which wouldn't have to be anything rigorous, I wouldn't imagine.

173. anonymous

A simple, 10-minute college-placement test would suffice: 3 or 4 questions from each major field of study, with an option to skip, of course. The 'skip question' would be an immediate flag for the level of question a user would be qualified (for lack of a better word) to answer.

174. anonymous

That is only a suggestion, to add imagery to the idea. Not necessarily the way you all would maneuver it.

175. anonymous

@Dyiliq Right. Constructing such an assessment test seems more difficult than implementing an add-subject/subtopic function.

176. anonymous

Oh, I'm not trying to coax a veto of that idea at all. In fact, like I said, it will be absolutely necessary. But I don't think you'll get it to function correctly without the appropriate link to the user. (Not just the user selecting the topic and subtopic, but the topic and subtopic will need to be selecting the user.) The creation of such an assessment will be a piece of cake. All you have to do is envision a three-tier model of "question difficulty" and create one or two questions for each tier in each major field of study. The user themselves will be selecting which topics they wish to study, and they will be able to answer the questions up to the tier that they are naturally comfortable asking questions in. It only seems like the natural course for a website of this type to go, and it would take no longer than anything else you all have worked on so hard to come this far.

177. anonymous

@Dyiliq Word. Explain this: "(Not just the user selecting the topic and subtopic, but the topic and subtopic will need to be selecting the user.)" Is that related to recommending future areas of study, related areas of study, etc.? Well, I must say the combinatorics of creating subject-specific evaluations of 3 tiers of 2-3 questions each will be massive as we expand. I think that subject/subtopic should be implemented first, before an evaluation is created, because users will inherently express their "level" by the questions they choose to answer.

178. anonymous

When I say the topics need to be selecting the users, I mean that a biochem question asked by a fourth-year college student is not even going to be looked at twice by a high school chemistry student. And fourth-year college students will breeze over the high school questions in search of a greater challenge. No one wants to be Mr. Smarty Pants. And a high school chemistry student will never be motivated to answer a question him/herself if he/she has to wade through 25 of the aforementioned college questions before finally skimming over one they might be able to answer. So let there be a preliminary questionnaire that segregates these questioners, while still leaving high school biochemistry available to high school students, and college biochem available to college students. Not to separate by grade level, per se, but to create a bit of a community where there are differently advanced groups that recognize one another, but don't cloud the atmosphere with meaningless wandering through things that are too regressive or too complicated -- questions that eventually lead them to settle into the OpenChat and become lost forever to the back-and-forth I KNOW you all are aware of. And the major subjects can't expand much further than they have already... that's not including the sub-topics, anyway... but the subject-specific evaluations can expand symbiotically, can't they?

179. anonymous

A subject is incredibly broad. There are so many types of math and areas of study within math. Picking representative questions for each difficulty level is difficult. I don't think we have the current ability to do this; none of us are educational-difficulty experts. I think the concept is valid; I also think the time required to develop it is far greater than simply allowing a user to say "I am studying calculus" -- add a tag, then allow search upon that tag. And the end result is essentially the same: better visibility for questions I want to answer.

180. anonymous

True for math...

181. anonymous

I just think that first filtering on school level, then on subject/subtopic level, adds an extra step when the goal is merely to find a subtopic; and if the subtopic is searchable, then just allow search on the subtopic.
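Searching on a subtopic tag directly, as suggested, is nearly a one-liner (a toy sketch; the question records and tag names are made up for illustration):

```python
def questions_with_tag(questions, tag):
    """Filter a question feed down to one searched subtopic tag,
    skipping the school-level step entirely."""
    return [q for q in questions if tag in q.get("tags", ())]

feed = [
    {"title": "Chain rule help", "tags": ("math", "calculus")},
    {"title": "Balancing equations", "tags": ("chemistry",)},
]
calc = questions_with_tag(feed, "calculus")  # just the first question
```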

182. anonymous

Bingo!

183. anonymous

Allow the users to filter themselves, and also their questions...

184. anonymous

Perhaps a dedicated admin for filtering questions to the correct tables.

185. anonymous

You're right, it is pretty complex thinking in the broader futuristic scope

186. anonymous

Check this out... http://hyperphysics.phy-astr.gsu.edu/hbase/hph.html

187. anonymous

Have the concept bubbles lead students into 'rooms' where each question can be posted as a bulletin on the 'wall'.

188. anonymous

Then they look at the OpenStudy homepage and say "I wanna ask a trig math question," so they click math/trig.

189. anonymous

And you could make that go as deep as you want

190. anonymous

I tried to prove negative times negative equals positive

191. anonymous

Suppose we want to know what $$-2 \times -6$$ is. We know $$-2(6 + (-6)) = 0$$ [we can check this using a number line if anyone is not convinced]. We know the distributive property, so: $$-2\cdot 6 + (-2)(-6) = 0 \implies -12 + (-2)(-6) = 0 \implies (-2)(-6) = 12$$

192. amistre64

-k simply means the opposite of k, i.e. -1(k). For -k(-n), by association we get -[k(-n)]. Now, multiplication is just quick addition: $k(-n)=\underbrace{(-n)+(-n)+(-n)+...+(-n)}_{k~times}=-m$ Subbing back in, this gives us -(-m): the opposite of the opposite of m, which is m. Just an idea.

193. Koikkara

Well, i round it up to 55 medals !.....lol Have A Good Day, Nice to Meet You !

194. TheSmartOne

With 66 medals from this one question I don't know how that doesn't increase your SS...

195. zBrandz23

Wow very good information about how the scores work gonna work on getting my teamwork up to 99:D

196. ShadowLegendX

68 Medals o-o

197. anonymous

69, lol

198. anonymous

71

199. confluxepic

72 medals. That's an all time high. New record.

200. King.Void.

o_o

201. Nnesha

77 :P

202. One098

78 >.>

203. confluxepic

It will keep growing.

204. confluxepic

This post cleared my doubts on the sub scores.

205. Nnesha

80 o_^_^o

206. EclipsedStar

Same here, I learned a lot about subscores here.

207. khalilforthewin

thats crazy

208. TheSmartOne

My tutorial -- http://openstudy.com/study#/updates/543de42fe4b0b3c6e146b5e8 -- has officially surpassed this post in most medals :D

209. confluxepic

@malcolmmcswain

210. Khalid3166

98 medals!?!?!?

211. rebeccaxhawaii

101 medals

212. TheSmartOne

well the account got deleted so r.i.p all the medals

213. One098

xD Lol.

214. EclipsedStar

Bringing up old posts, are we e_e

215. TheSmartOne

and bringing back old users too ;)
