On Mon, 30 Jul 2018 22:20:39 -0700 (PDT), David Kleinecke
Post by David Kleinecke
Post by Peter Moylan
Post by Peter Percival
Post by Peter Moylan
Post by Madrigal Gurneyhalt
Post by Hen Hanna
Apparently, in creative writing courses (fiction and
non-fiction), one of the first things they tell you is to stop
using [Very] and [Suddenly] and other [-ly] adverbs.
The more you pay for a 'course' the more likely it is to be a
load of bollocks. This is a classic example.
Not entirely. I have noticed that I tend to overuse "however".
There are probably other words that I overuse. If I wanted to move
into a writing career, I would probably be willing to pay someone
to stop me from using "very" and "however".
Why? A word processor ought to be able to count the number of
occurrences of a particular word.
A word processor doesn't have the critical skills to be able to point
out which stylistic tics are likely to irritate readers.
Counting words would probably tell me that I use "a" and "the" a lot,
but that's not considered to be a writing fault.
If what I understand about neural networks is true what we
need to do is: Make a collection of short texts which people
like or dislike intensely and then feed these texts to the
network with the correct feedback.
"Feed these texts" is not an adequate description, unless you
have pre-defined a system of parameters. That is, "text"
consists of more than an the list of words. This suggests to
me an alternate system of training -- Create several texts using
exactly the same words (for the important words) in different
orders; start with something aphoristic, so it is "best". You
need a system - based on parameters - that discriminates among
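A minimal sketch of the proposal being discussed -- train a classifier on short texts labelled "liked" or "disliked" -- might look like the toy perceptron below. The texts, labels, and word choices are all invented for illustration; note that a bag-of-words model throws away word order entirely, which is exactly the objection above.

```python
def bag_of_words(text):
    """Map a text to word counts. Word ORDER is discarded here,
    so texts with the same words in different orders look identical."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def train_perceptron(samples, epochs=20):
    """samples: list of (text, label) pairs with label in {+1, -1}."""
    weights = {}
    for _ in range(epochs):
        for text, label in samples:
            feats = bag_of_words(text)
            score = sum(weights.get(w, 0.0) * c for w, c in feats.items())
            if score * label <= 0:              # misclassified: update
                for w, c in feats.items():
                    weights[w] = weights.get(w, 0.0) + label * c
    return weights

def predict(weights, text):
    score = sum(weights.get(w, 0.0) * c
                for w, c in bag_of_words(text).items())
    return 1 if score > 0 else -1

# Invented training data: the "liked" texts avoid intensifiers.
samples = [
    ("the prose is clean and spare", 1),
    ("a quiet precise sentence", 1),
    ("very very suddenly it was extremely bad", -1),
    ("really very awfully overwrought prose", -1),
]
weights = train_perceptron(samples)
```

With four hand-picked samples this "works", which is precisely the trap the rest of the thread discusses.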
Post by David Kleinecke
After enough training the
network will be able to predict the human response with good
accuracy.
With as many continuous, non-tied parameters as there are
texts, you can POST-dict, i.e., "fit", perfectly.
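The "fit perfectly" point can be made concrete with a standard example (not from the thread): a polynomial with as many coefficients as there are data points passes through every point exactly, however meaningless the data.

```python
def lagrange_fit(xs, ys):
    """Return a function interpolating the points exactly (Lagrange form).
    n points, n coefficients: a perfect 'post-diction' every time."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# Five "ratings" that are pure noise:
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0, -1.0, 4.0, 1.0, -5.0]
f = lagrange_fit(xs, ys)
# f reproduces every training point exactly, yet that says
# nothing about any sixth, unseen point.
```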
"Prediction" is tested with cross-validation. For a lot of hypotheses,
a lot of cross-validation. There should be enough cross-validation
so that crappy parameters will, indeed, show crappy results, or
you are just cherry-picking results, and your results will not work
when moved to the real world.
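The cross-validation idea, in its simplest (leave-one-out) form: every text is scored by a model that never saw it, so a model that merely memorizes the data is caught. A sketch, with the train/predict interface left generic:

```python
def cross_validate(samples, train, predict):
    """Leave-one-out cross-validation.
    samples: list of (x, y) pairs. Returns held-out accuracy:
    each sample is predicted by a model trained on all the others."""
    correct = 0
    for i in range(len(samples)):
        x_held, y_held = samples[i]
        training = samples[:i] + samples[i + 1:]
        model = train(training)
        if predict(model, x_held) == y_held:
            correct += 1
    return correct / len(samples)
```

A model that fit the full data perfectly can still score poorly here, which is the point: held-out accuracy, not training fit, is the test of prediction.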
Post by David Kleinecke
But it will not be able to tell us why one text is
liked and another disliked.
You need content-experts to set up the useful parameters. I'm
not sure of this, but I think that systems, so far, are "simple"
enough (though with tedious detail) that any competent
statistician should be able to track back to see the major influences.
And someone is probably guilty of malpractice if they don't
look that carefully at systems that make important decisions.
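For a linear model, at least, "tracking back the major influences" is straightforward: the weights themselves are the audit trail. A sketch, with invented weights for illustration:

```python
# Hypothetical learned weights for a liked/disliked text model.
weights = {"very": -2.0, "suddenly": -1.0, "spare": 1.5,
           "precise": 1.2, "the": 0.1}

def major_influences(weights, top=3):
    """Words ranked by absolute weight, most influential first.
    This is the simplest kind of 'why' a model can offer."""
    return sorted(weights, key=lambda w: abs(weights[w]), reverse=True)[:top]

print(major_influences(weights))   # most influential words first
```

Deep networks are harder to read off this way, which is presumably where the "un-checkable" question below comes from.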
I'm curious - Has anyone heard of a system in use that was/is
considered un-checkable in that way?