Friday, 15 March 2019

Is re-branding worth it? A case study...

The Eden Paradox series - four books published between 2011 and 2014 - sold well into four figures (in the 1,000-6,000 range, mainly eBooks), particularly the first two, but the entire series did well. After 2015, though, sales waned. By 2018 they hadn't exactly flatlined, but it was a slow, coma-like pulse of a few books a month.

At the end of 2018, after discussions with the publishers, I re-acquired the books. I then produced new covers for the first two, using better artwork more consistent with the style of the last two books, and re-published them myself on Amazon KDP. The re-publishing part was free though laborious, but by Jan 25 I was ready to re-launch the entire series. I didn't do much personally in the way of marketing, as a few 'life events' were getting in the way, but the results - whilst not spectacular in best-seller terms - were heartening, as you can see from the graph (250 sales over six weeks) taken directly from KDP's site yesterday.

In particular, book 1 has done well, and a healthy number of readers are going on to read the entire series (it's early days, so I hope the sales of books 2-4 will strengthen). Most sales are in the US, with the UK close behind, plus some in France and Canada, and one Aussie.

I'll be launching a new book (the first in a new series) by the summer, so hopefully if that one does well it will keep an interest in this series, too, and vice versa.

So, is it worth all the effort to re-brand and re-launch an old series? Well, as with most authors, it's not about the money, it's about being read, and for me this has been a definite 'win', having gained a new batch of readers for a series I poured heart and soul into five years ago. I'll review the sales figures at the end of the year to see whether they're sustainable.

In the meantime, for other authors who may be pondering this same question, the answer is simple - re-branding and re-launching can breathe new sales life into a series.

Sunday, 3 March 2019

On alien AI...

I'm editing the final chapter of the new novel, When the Children Come, and without giving too much away, the human protagonist encounters an AI (Artificial Intelligence). But this is not a human-created one; it comes from an alien race. So it doesn't carry any human baggage in its algorithms, including the stuff we'd like to rid ourselves of, such as hatred and our capacity for war. However, it wants some...

AI is not new - whether your starting point is Asimov's robots, the robot from Forbidden Planet, or Star Trek's Data, the typical idea is of a mature and pure intellect. Purity in sci-fi seems to have nothing to do with good or bad, so AIs have often been portrayed as evil, ruthless and even malicious, though as a psychologist, I'm not sure how the emotional content would get coded...

The basic idea behind an AI is that it is a learning system, whether via neural nets, machine learning, or some other approach, running on basic algorithms that can later become self-adapting. This means AIs can become very smart (though whether knowledge and data alone breed wisdom is another question) and make very fast decisions. Future warfare scenarios will probably involve AIs, which can make judgement calls (based on instilled values such as loss of life, mission criticality, collateral damage, use of resources, etc.) far faster than a battle commander - in theory. In practice, warfare is about as messy as humanity gets, and there are rarely any clean decisions, so maybe (I'm hoping) there will always be a human in the loop.

The fear, most famously expressed in the Terminator film series, is that the AI does indeed get smarter and decides humanity's fate in a nanosecond. But surely there could be an off-switch, or a way of detecting an AI turning into 'malware'? Not so much, because of the complexity. There are big debates going on right now about whether AI 'thinking' must remain explainable. Think of self-driving cars for a moment. Imagine one such car runs down a cyclist. Why did it happen? Unravelling the thousands of lines of code (make that hundreds of thousands, probably millions) to work out why it behaved that way is very laborious, and may soon become intractable. That's because when we tell an AI to learn, we effectively push it out of the nest and let it learn to fly on its own.

So, coming back to science fiction, what about Asimov's three laws, which he implanted in his robots' psyches so they could never harm humans? Could we have such a 'backstop' in AIs? In the upcoming novel, the AI in question comes from a benign race. But it wants some of the bad stuff, and sees humanity as a way of overcoming the backstops in its core programming. Which gives the humans in question something of a dilemma, as they are going to need this AI in order to survive...

© Barry Kirwan