Author Archives: Kae Verens

rewriting the trading algorithm

I have the script now running at a realistic 1.14% per day return, taking into account the average number of times I’ve had to pay a trading fee, the average offset between the price a trade actually happens at and the price the algorithm wanted, and other unpredictable things.

The script, as it is, is very hard to train in any way other than an almost brute-force fashion. Because most of the variables (length of the EMA long tail, number of bars to measure for ATR, etc.) are integers, I can’t home in on the right values using a momentum-based trainer, so I need to do it using long, laborious loops.

There is one improvement I could make, which would probably double the return or more without drastically changing the script. At the moment there is a lot of “down time”, where the script has some cash sitting in the wallet and is waiting for a good opportunity to jump on a deal. If the script were to consider other currencies at the same time, it would be able to use that down time to buy into those currencies.

On second thought, I think the returns would be much higher, because when the script currently makes a BUY trade, it’s usually sold again within about 30 minutes. That means it could potentially make 48 BUY trades in a day, each with an average of maybe a 0.5% return. That’s about a 27% return ((1 + 0.5/100)^48 ≈ 1.27).
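
As a sanity check on that arithmetic (48 trades per day and 0.5% per trade are the rough figures above, not measured values):

```python
# rough compounding check: 48 trades per day at ~0.5% net return each
trades_per_day = 48
return_per_trade = 0.005

daily_multiplier = (1 + return_per_trade) ** trades_per_day
print(f"daily return: {(daily_multiplier - 1) * 100:.1f}%")  # ~27.0%
```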

That would be nice, but it’s impossible with GDAX, because the market only caters to four cryptocurrencies at the moment. Also, while the money aspect is nice, I’m actually doing this more for the puzzle-solving aspect. I get a real thrill out of adding in the real-life toils and troubles (fees, unpredictable trades, etc.) and coming up with solutions that return interest despite them. So, I’m not going to refine the hell out of the script. Instead, I’m going to start working on another version.

Before I started on the current version, I had made a naive attempt at market prediction using neural networks. While some of the results looked very realistic, none of them stood up to scrutiny. I failed. But, I also learned a lot.

I’m going to make another neural-net-based attempt, using a different approach.

The idea I have is that instead of trying to predict the upcoming market values, I’ll simply try to predict whether I should buy or sell. This reduces the complexity a lot, because instead of trying to predict an arbitrary number, I’m only outputting a 1 (buy), 0 (nothing), or -1 (sell). This can then be checked by running the market from day 1 of the test data using each iteration of the network, and then adjusting the weights of the network based on whether the end result was higher or lower than the last time it was run.
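
A minimal sketch of the kind of feedback loop I mean, where run_simulation is a stand-in for replaying the test data with a given set of weights and returning the final balance, not real code from the script:

```python
import random

def train(weights, run_simulation, iterations=1000, step=0.01):
    """Nudge one weight at a time and keep the change only if the
    simulated end balance improves (simple hill-climbing)."""
    best = run_simulation(weights)
    for _ in range(iterations):
        i = random.randrange(len(weights))
        old = weights[i]
        weights[i] += random.uniform(-step, step)
        score = run_simulation(weights)
        if score > best:
            best = score       # keep the improvement
        else:
            weights[i] = old   # revert the change
    return weights, best
```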

I noticed that the best tunings from my current script only made a very few trades, on the very few days where the market basically dropped like a stone and then rebounded. But I can’t assume those will happen very often, so I’d like to see a few trades happen every day. With the neural network, I can increase the number of trades simply by adjusting the output function that decides whether a result is -1, 0, or 1 (this is usually done by converting the inputs into a value between -1 and 1, then rounding to an integer based on whether the value is above 0.7 or below -0.7; that threshold can easily be adjusted to produce more 1/-1 hits).
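
The thresholding itself is tiny. Something like this hypothetical helper, where lowering the threshold from 0.7 produces more buy/sell signals:

```python
def decide(raw_output, threshold=0.7):
    """Map a network output in [-1, 1] to a trade signal:
    1 = buy, -1 = sell, 0 = do nothing."""
    if raw_output >= threshold:
        return 1
    if raw_output <= -threshold:
        return -1
    return 0
```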

The current approach also involves a lot of math to measure ATR (average true range), running EMA (exponential moving average), etc. With the neural network approach, I will just need to squish the numbers down to form a pattern between -1 and 1 (where 0 is the current market price) and run directly against those numbers.
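
One way to do that squishing (a sketch, not the final code): shift the window so the current price sits at 0, then divide by the largest deviation in the window so everything lands between -1 and 1:

```python
def normalise_window(prices):
    """Scale a window of recent prices to [-1, 1],
    with the most recent price mapped to 0."""
    current = prices[-1]
    deviations = [p - current for p in prices]
    largest = max(abs(d) for d in deviations) or 1.0  # avoid divide-by-zero
    return [d / largest for d in deviations]
```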

Because neural networks are a lot more “touchy-feely” than the current EMA/stop-gain/stop-loss approach I’m using, I will be able to home in on good values – something I cannot do at the moment because of the step-like nature of integers.

I won’t be using any off-the-shelf networks, as I can’t imagine how to write a good trainer for them. Instead, I’ll write my own, using a cascade-correlation approach.

Cascade-correlation is an approach to neural networks which allows a network to “grow” and gradually learn features of the input data. You train a network layer against every single input until the output stops improving. You then “freeze” that layer so it is not adjusted anymore. You then create a second layer which trains against every single input plus the previously trained layer. You continue adding layers until there is no more noticeable improvement.

The point of this is that the first training layer will notice a feature in the data that produces a good result, and will train itself to recognise that feature very well. The second layer will then be able to ignore that feature (because it’s already being checked for) and find another one that improves the results. It’s like how you decide what animal is in a picture – does it have eight legs (level one), is it red (level two), does it have a stinger (level three) – it’s a scorpion! Instead of trying to learn all the features of a successful sale at once, the algorithm picks up a new one at each level.

Around Christmas, I was playing with the FANN version of cascade correlation, and I think it’s very limited. It appears to create each new level based on all inputs but only the last feature-detection layer. Using the above example, this would make it difficult to recognise a black scorpion, as it would not be red. I believe that ideally, each feature layer should be treated as a separate new input, letting the end output make decisions based on multiple parallel features, not a single linear chain of feature decisions.
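
To make that concrete, here is a rough sketch of the structure I have in mind – not FANN, and not training code, just the shape of the thing: every feature unit sees the raw inputs plus the outputs of all previously frozen units, and the final decision can use all of the feature outputs in parallel.

```python
import math

class CascadeNet:
    """Sketch of a cascade-style network where every frozen feature
    unit feeds the final decision in parallel, not in a single chain."""

    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.features = []  # weight vectors of frozen feature units

    def feature_outputs(self, inputs):
        outs = []
        for weights in self.features:
            # each unit sees the raw inputs plus all earlier feature outputs
            extended = list(inputs) + outs
            activation = sum(w * x for w, x in zip(weights, extended))
            outs.append(math.tanh(activation))
        return outs

    def add_feature(self, weights):
        """Freeze a newly trained unit; its weights are never adjusted again."""
        self.features.append(weights)
```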

using LIMIT trades to reduce fees

I made two enhancements to my Litecoin market script over the weekend.

The first one was based on something I did last week, when it occurred to me that if you buy when the ATR is low, you will probably lose money through fees, because the market value will not shift enough for there to be a profit.

For example, if the value is 200 when you buy and 201 when you sell, then you will lose money. Let’s say you buy €10 of coins. They will increase in value to €10.05 by the time you sell them, but you will pay just over €0.06 in fees.

An ATR limit large enough to discourage the script from buying when the margin is so small would stop this issue.

However, I realised that a limit that worked at around a value of 200 would not be effective at 400.

The solution was to make a new limit which is inversely proportional to the market value. Let’s say the number is 50. Then it would look for an ATR of 0.25 if the market value was 200, and an ATR of 0.125 if the value was 400.
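
In code it’s just a division – a sketch, with 50 standing in for the example constant above rather than a tuned value:

```python
def volatile_enough(atr, price, constant=50):
    """Allow a trade only when the ATR is above a limit that scales
    inversely with the market price (50/200 = 0.25, 50/400 = 0.125)."""
    return atr >= constant / price
```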

This made a remarkable difference in my simulation. It now estimates a 3.15% return per day based on the configuration figures I found.

Last week’s version ended up with about 14 buys in a 50 day period, which meant that there was only about one buy every 4 days, making it look like it wasn’t doing anything.

Now, it has what looks like about 32 events per day. A lot of them are repeats – a sell signal might pop up from a chandelier exit, followed by another from the exponential moving average or the simple moving average – but it’s still a lot more, and it makes the script feel a lot more alive.

This is helped by me changing the trade method.

I had it on MARKET trades, which are virtually guaranteed sales/buys, but also are guaranteed 0.3% fees.

I’ve changed that to LIMIT trades that work in a way that might not trade at exactly what was requested, but should not trigger a fee at all (at least, I haven’t had a single fee yet in my tests!).

How it works: let’s say the market value is 200 right now. The script checks the order books, which might currently have an “ask” of 200.01 and a “bid” of 199.99 (for example). If we are trying to sell, we add a sell/ask order at 200.02 (the current ask + 0.01). If the market value goes down, we cancel that order and create a new one based on whatever the new value is.

And vice versa with buys/bids.

This means we probably won’t get exactly what we want, because we are relying on market jitter to make the trade happen. But at least we won’t have a fee to worry about!
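
Here’s a rough sketch of that cancel-and-replace loop. The client methods (get_order_book, place_limit_order, cancel_order, is_filled) are hypothetical stand-ins, not the actual GDAX API calls:

```python
import time

def sell_with_limit(client, size, offset=0.01, poll_seconds=5):
    """Keep a sell order just above the current ask; if the book moves,
    cancel and re-place at the new level. Relies on market jitter to
    fill the order as a maker, so no taker fee is charged."""
    order_id = None
    last_price = None
    while True:
        book = client.get_order_book()            # hypothetical call
        price = book["ask"] + offset
        if price != last_price:
            if order_id is not None:
                client.cancel_order(order_id)     # hypothetical call
            order_id = client.place_limit_order(  # hypothetical call
                side="sell", size=size, price=price)
            last_price = price
        if client.is_filled(order_id):            # hypothetical call
            return order_id
        time.sleep(poll_seconds)
```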

using ATR to restrict market purchases

After re-checking the simulations I’d done recently, I realised I had the wrong broker fee marked in, which meant that even though the simulation said I was making a 1% return every day, I was actually losing money in reality.

The script was originally making LIMIT buys and sells, which my experience showed rarely caused a fee, so I’d marked that as a 0.1% fee in the simulation vs the normal 0.3% that a MARKET trade has. But the simulation didn’t take into account that LIMIT trades sometimes don’t get filled. The market might be at 147, for example, so you try to sell your 4 Litecoin (say) at 147, but by the time you get the order uploaded, the market might have shifted down. Now you’re stuck with an unfulfilled order.

To solve this, I changed the script to only use MARKET trades, which are guaranteed to sell/buy, but also are guaranteed to incur a fee of 0.3%.

When I plugged 0.3% into my simulation, I was suddenly not making any money at all. Instead, I was losing money. In fact, the simulation showed I was losing money so badly that if I started with €1,000,000 46 days ago, I would be down to €14 today. That’s bad.

Looking at a 6hr chart of the Litecoin market, we can see where the money vanished.

On the 6hr chart, the highlighted area looks very flat, like nothing is happening there, so you would not expect any trades to happen. But if you zoom into the area, you can see that trades are still happening, even though the value isn’t changing very much in absolute terms.

Let’s zoom in even further, to the area that’s highlighted:

You can see that the value is rising and falling vigorously, but this is an illusion. If you look at the figures on the right, you see that LTC is oscillating between about €142 and €150 per coin. That’s about a 5% range. Remember that the fee we’re trying to avoid is 0.3%, so this /might/ be okay to trade with.

But the script trades minute by minute, not hour by hour. So let’s look at what the market does in that range by zooming in even further:

That’s one hour of data – it’s what is contained in the single bar all the way on the right of the preceding image. In it, the price of Litecoin rises steadily from €144 to €146.

Worth buying?

144 to 146 is less than a 1.4% rise. If you buy €100 worth of LTC at the beginning of this rise and sell it at the end, it will cost you €0.30 to buy (so you’ll actually spend €100.30), the value will increase from €100 to €101.39, and then when you sell, there will be a fee of €0.31.

So even though the price went up, from €100 to €101.39, you will only have made €0.78.

This is still a profit, but if the rise was less, it might have been a loss.

For example, let’s say it was 200 to 201:

You invest €100 and pay your €0.30 fee. The value increases to €100.50. You then cash out. Now you’ve just lost money, because the fees add up to about €0.60, but the price increase was only €0.50.

Buying when the increase is so small can be dangerous, because if the increase isn’t enough, then you will lose money.
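
The arithmetic is worth wrapping in a little helper, because it makes obvious how big a move has to be before a round trip beats the two 0.3% fees (a sketch using the figures from the examples above):

```python
def round_trip_profit(stake, buy_price, sell_price, fee_rate=0.003):
    """Net profit of buying `stake` euro of coins and selling later,
    paying the fee on both the buy and the sell."""
    buy_fee = stake * fee_rate
    value_at_sale = stake * sell_price / buy_price
    sell_fee = value_at_sale * fee_rate
    return value_at_sale - stake - buy_fee - sell_fee

print(round_trip_profit(100, 144, 146))  # ~0.78  (the rise above)
print(round_trip_profit(100, 200, 201))  # ~-0.10 (the 200-to-201 case)
```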

So how to solve this?

The problem has to do with the volatility of the market. For the last few days in Litecoin land, there pretty-much hasn’t been any!

After realising the above, I made a small adjustment to my script so that it refused to allow any trades at all if the volatility was too small. I did this by measuring the ATR (average true range), and if it was below a certain range (1.5 or so), then even if the script signalled a buy or sell, it was stopped in its tracks.
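
For reference, a minimal sketch of that gate – the ATR is the standard average of the true range, and 1.5 is just the example limit mentioned above:

```python
def average_true_range(bars, period=14):
    """bars: list of (high, low, close) tuples, oldest first."""
    true_ranges = []
    for i in range(1, len(bars)):
        high, low, _ = bars[i]
        prev_close = bars[i - 1][2]
        true_ranges.append(max(high - low,
                               abs(high - prev_close),
                               abs(low - prev_close)))
    recent = true_ranges[-period:]
    return sum(recent) / len(recent)

def allow_trade(bars, atr_limit=1.5):
    """Block buy/sell signals while the market is too flat."""
    return average_true_range(bars) >= atr_limit
```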

This had an immediate and amazing effect on the returns.

Beforehand, with a 0.1% fee, I was getting a 1.25% daily return in my simulations, but now I’m getting a 2.8% daily return on a 0.3% fee. That’s HUGE! In money terms, if I had invested €20 46 days ago on the 9th of December (chosen by finding the oldest data point which had a value higher than the most recent data point), then that €20 would now be worth €73.80.

You can check these figures for yourself.

Here’s the list of buys and sells that it came up with for that time. Compare that with the GDAX charts to see where it was making its decisions:

Date Total Holdings Euro Litecoin Decision
2017-12-09 06:16:00 20 20 0 sell (EMA)
2017-12-09 18:19:00 19.94063602112 0.15264306112 0.1467952 buy (SMA)
2017-12-12 03:31:00 29.367424251546 29.068528237546 0.0014974 sell (EMA)
2017-12-12 03:48:00 29.304203063656 0.221864459656 0.1352604 buy (EMA)
2017-12-12 15:14:00 38.521751703256 38.129916903256 0.0013797 sell (EMA)
2017-12-12 15:31:00 38.458243719256 0.291011719256 0.1192726 buy (EMA)
2017-12-12 16:15:00 35.489678406936 35.129576972936 0.0012166 sell (EMA)
2017-12-13 19:09:00 35.340416940661 0.268105949661 0.1354091 buy (SMA)
2017-12-19 21:14:00 39.434991448454 39.034291516454 0.0013812 sell (Chandelier exit)
2017-12-20 07:04:00 39.299779309244 0.297920919244 0.1412599 buy (SMA)
2017-12-20 13:56:00 40.357168121734 39.947332934734 0.0014409 sell (SMA)
2017-12-22 04:12:00 40.175830888054 0.304884745054 0.1655289 buy (SMA)
2017-12-22 05:22:00 38.593556557054 38.201847757054 0.0016884 sell (EMA)
2017-12-22 07:54:00 38.430341158372 0.291552148372 0.188349 buy (SMA)
2017-12-22 08:14:00 35.965964610874 35.600994246874 0.0019212 sell (EMA)
2017-12-22 08:37:00 35.881484581174 0.271701481174 0.1771631 buy (EMA)
2017-12-22 11:24:00 39.402103180134 39.001776317134 0.0018071 sell (EMA)
2017-12-22 15:15:00 39.223938974334 0.297663674334 0.2081619 buy (SMA)
2017-12-22 23:36:00 49.655726432778 49.150763226778 0.0021233 sell (EMA)
2017-12-24 03:46:00 49.523277677268 0.375125812268 0.2013031 buy (SMA)
2017-12-24 03:53:00 48.586558079894 48.093334886894 0.0020533 sell (EMA)
2017-12-24 12:22:00 48.422843305894 0.367057305894 0.2089382 buy (SMA)
2017-12-25 02:34:00 51.421578583214 50.899264087214 0.0021312 sell (Chandelier exit)
2017-12-28 19:25:00 51.208545412331 0.388455939331 0.2352673 buy (SMA)
2018-01-06 05:27:00 53.762217368481 53.216166876481 0.0023998 sell (SMA)
2018-01-16 22:55:00 53.395286595271 0.406147617271 0.3772543 buy (SMA)
2018-01-17 01:04:00 58.597851304252 58.002527224252 0.003848 sell (EMA)
2018-01-17 16:11:00 58.311595018118 0.442668760118 0.4627293 buy (SMA)
2018-01-17 22:56:00 73.798156976318 73.047692876318 0.0047199 sell (EMA)


Notice that the script has not made any decisions in the last few days (today’s the 25th). That’s because there has been nothing interesting happening in the market recently, so it’s holding on to what it has.

Here’s a pretty picture showing the above in line format

automated script for the Litecoin market

Over Christmas, I started looking into how stock markets work and decided to give it a shot. The simplest way I found to start off on it was actually the cryptocoin market. I chose Coinbase’s GDAX market as the one I’d work on.

At first, I had a naive idea that I just needed to watch the numbers on the market for times when the latest figure is the lowest in a long time. Then you buy. And then the opposite: if the latest bar is the highest in a long time, sell.

It turns out that doesn’t work. I wrote a testing application that downloaded 6 months of per-minute data about the LTC-EUR market and ran simulations against it to figure out what would happen if I was to trade based on my algorithms. The first one (above) sucked.

So I started looking a bit further into how traders actually do it themselves.

It turns out it’s pretty simple, if you’re willing to put the testing time in and come up with some good configuration numbers.

The first thing I checked out was called “MACD” (Moving Average Convergence Divergence). That uses a simple moving average (SMA) of the market value to generate two lines – a “long” average based on 26 figures, and a “short” average based on 12 figures. When the short average crosses over the long, it signals an action. For example, if the current short value is higher than the current long, and the last calculations were the opposite (short under long), then that indicates you should Buy, because it looks like there is an upwards trend. The opposite happens when the crossover shows the short going under the long. Sell.
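
A bare-bones version of that crossover check as described above (simple averages of the last 12 and 26 closes, signalling on the crossover) – a sketch, not the script itself:

```python
def sma(values, length):
    """Simple moving average of the last `length` values."""
    return sum(values[-length:]) / length

def crossover_signal(closes, short=12, long=26):
    """Return 'buy' when the short average crosses above the long one,
    'sell' when it crosses below, otherwise None."""
    if len(closes) <= long:
        return None
    prev_short, prev_long = sma(closes[:-1], short), sma(closes[:-1], long)
    cur_short, cur_long = sma(closes, short), sma(closes, long)
    if prev_short <= prev_long and cur_short > cur_long:
        return "buy"
    if prev_short >= prev_long and cur_short < cur_long:
        return "sell"
    return None
```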

The 12 and 26 figures are traditional. You could work based on them, but my tests showed that there are different figures that can give you better results. I guess it depends on the market. My current settings here are 25/43 instead of 12/26.

The next thing I worked on was a “Chandelier Exit”. This is a strategy for cutting your losses when the market suddenly drops more than usual. To do this, you measure the “ATR” (average true range) for the last n periods (traditionally 22). You then multiply the ATR by a volatility value (traditionally 3), subtract that from the current High value, and if the current market value is below that, Sell. My current values for this are a volatility of 5.59 based on an ATR of 18 bars.
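
As a sketch of that check, using the 18-bar ATR and 5.59 multiplier mentioned above (I’ve used the highest high of the window as the reference, which is the usual formulation; the real script may use the current bar’s high):

```python
def average_true_range(bars, period):
    """bars: (high, low, close) tuples, oldest first."""
    trs = [max(h - l, abs(h - bars[i - 1][2]), abs(l - bars[i - 1][2]))
           for i, (h, l, _) in enumerate(bars) if i > 0]
    return sum(trs[-period:]) / period

def chandelier_exit_triggered(bars, price, period=18, multiplier=5.59):
    """Signal a sell when price falls below (recent high - multiplier * ATR)."""
    highest_high = max(h for h, _, _ in bars[-period:])
    return price < highest_high - multiplier * average_true_range(bars, period)
```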

I then looked at exponential moving average based MACD. The standard moving average is a straightforward average of n numbers, but the EMA puts more weight on the more recent numbers, so it reacts quicker to changes in the market.
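
The EMA itself is just a short recurrence, blending each new value into the running average with a weight of 2/(length+1) – a sketch:

```python
def ema(values, length):
    """Exponential moving average: recent values get more weight."""
    k = 2 / (length + 1)
    average = values[0]
    for value in values[1:]:
        average = value * k + average * (1 - k)
    return average
```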

After trying to tune the EMA for a while, I found that if I use EMA instead of SMA, I get worse results, because the script would buy quickly when it saw an upward trend, but that might turn out to be just jitter and you lose it all immediately afterwards. It’s safe to sell when the market drops; it’s not safe to buy when the market looks like it’s just starting to rise. It’s better to take your time.

So, I added a switch to my code so that I could decide whether to use SMA or EMA for buys and sells, etc.

I found that the combination that gives the best results uses only SMA for buys, but then uses all of SMA, EMA and Chandelier exits to signal a sell. Oh – EMA of 40 and 80.

Doing this, I’ve been able to come up with a configuration of my script that gives an average return of about 1.1%. This means that if you were to invest €5000, then there would be “interest” of about €55 per day. Or if you can keep your money in the game, it starts to grow. €50 invested for 365 days at 1.1% interest per day is €2711.
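
Quick check on those figures:

```python
print(5000 * 0.011)       # ≈ 55:   euro of "interest" per day on €5000
print(50 * 1.011 ** 365)  # ≈ 2711: €50 compounded at 1.1% per day for 365 days
```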

If you’re interested in giving the script a shot, you can download it from here.

I keep on having more ideas on how to improve this.

salepredict3: automated test results

Based on a suggestion by Ché Lucero (LinkedIn), I wrote a test to see exactly how accurate this machine is.

I had 41 domains already entered into the engine and categorised as Sale or Fail, so the test was based on those.

For each of the domains, the test (sketched in code after this list):

  1. changed the domain’s type from sale/fail to prospect
  2. retrained the neural net using the rest of the domains as its reference data
  3. calculated how much of a match the domain was to a sale using that neural net
  4. if the calculation indicated correctly a sale or a fail, then that counted as a correct test
  5. finally, clean up – reset the domain’s type back to sale/fail, ready for the next test
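
In rough Python, the loop looks like this; set_type, retrain and predict are hypothetical stand-ins for the real engine calls:

```python
def leave_one_out_accuracy(domains, set_type, retrain, predict):
    """domains: list of (name, actual_type), where actual_type is 'sale' or 'fail'."""
    correct = 0
    for name, actual in domains:
        set_type(name, "prospect")   # hide the known answer
        net = retrain(exclude=name)  # train on the remaining domains
        if predict(net, name) == actual:
            correct += 1
        set_type(name, actual)       # restore for the next round
    return correct / len(domains)
```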

After 41 tests, it got 27 correct – an accuracy of 65.85%. That’s much more than chance (50%).

I’m going to get some more data now, but I expect it will only improve the value, not decrease it.

What does this mean for your own business?

Well, let’s say you have 100 companies you can potentially sell to, and you expect that 50% of them might end up being a waste of time, but you still need to spend about 2 hours on each in order to find that out.

Without using my engine, after 100 hours of selling, you will have made 25 sales. (100 hours is 50 companies. 50% success rate so 50/2 = 25).

With my engine, after 100 hours of selling, you will have made about 33 sales, because it will have pre-ordered the companies and got the ordering about 66% right, so the first 50 companies on its list will contain roughly 66% of all the successful sales.

salepredict2: we are live

I’ve finished the base engine of the Sale Predict project. If you go to the site and fill in some of your sales and fails, it will be able to predict the chance of success for any prospective jobs you have.

For example, let’s say you have successfully sold to 25 companies before, and 25 other companies have turned you down. Let’s say you also have a list of a further 25 companies that you want to approach, but because each of these takes a few hours of research and negotiation, you would prefer to work on them in order of how closely they resemble the companies you have already sold to.

All you need to do is put in the 25 sales and fails, and a neural network will be automatically trained up based on that data, which will then be able to analyse the 25 prospects that you have.

The engine currently accepts logins via LinkedIn and Facebook. I will add more.

You are given 100 free credits as soon as you login. This lets you test it out to see if it works for you. I will add a payment method shortly for increasing the number of available credits. 100 should be enough for anyone to realise how effective this is.

I’m working on an automated test at the moment to figure out exactly how successful the engine actually is. The test takes a list of sales and fails and runs a round of tests on each of those websites: it temporarily changes the website to a “prospect” so the system does not know whether it was a sale or a fail, retrains the network on the other domains, and then checks whether it accurately predicts the original value (sale or fail) for the test domain. This will take a while to run, so I’ll post the results in the next article.

salepredict1: the Sale Predict project

Around April of this year, I had an idea that I wanted to pursue. I felt it would make an important difference to our business (FieldMotion). The itch was so strong that one weekend, I set up a server and wrote a prototype of the idea in my own time. It worked perfectly. But it solved a problem that we weren’t interested in anymore, so the work I did on it was mostly wasted. A by-product of it turned out to be some useful information, but not the main part of it.

Okay – let’s look at the problem.

Let’s say you’re a business that is trying to expand. You get your work by contacting other businesses that you think may need your product, and trying to get them to work with you. Cold-calling, or trying to arrange a meeting through mutual friends, etc.

The old way to do this would be to get a phone directory, find a list of companies in an industry that you think is right, and just start calling, working each number one by one until you find one that sticks.

But this is usually a waste of time. Either the prospects already have a solution, have no interest, or are too dissimilar to those you’ve sold to before so you can’t establish a common ground.

The problem, shortened, is this: How can you take a long list of potential clients, and order them so that those most likely to buy from you are first in the list?

A solution to this came to me earlier this year. You need to find companies that are similar to those that you have already signed with, but that are not similar to companies you failed to sign with. This is a top-level description, obviously. The technical details of how to measure similarity are beyond this article.

To do this, I wrote a program that takes three lists of domain names:

  1. domain names of companies that you have done business with.
  2. domain names of companies that you cannot do business with (either they’re too unsuitable for your work, or they just said No for any reason).
  3. a long list of domain names of companies that you want to put into the best order to call them in.

The program reads the front page, and all pages linked from the front page, of every mentioned domain name, extracts words and “n-grams” (groups of words), and uses a neural network to figure out what kind of language is used by the companies that you usually sell to.
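
The words-and-n-grams step is nothing exotic – roughly this kind of thing, shown here for groups of up to three words:

```python
import re

def extract_ngrams(text, max_n=3):
    """Return single words plus word groups of up to max_n words."""
    words = re.findall(r"[a-z']+", text.lower())
    ngrams = []
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            ngrams.append(" ".join(words[i:i + n]))
    return ngrams

print(extract_ngrams("field service management software", max_n=2))
```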

After this, it can then come up with individual numerical scores of how suitable each prospective company is.

I ran this on a list of about 50,000 companies as a test back in April, to see what it would say about my own company’s chances with those prospects. In the top 10, it named a company that we had actually talked to a few years before and that had said they would go with us except we were too young at the time. In the bottom 10, it listed a charity shop, which is totally not our target audience. The thing worked!

But, we don’t work in that way anymore, so it turns out that the list generated by the machine was never used. Oh well.

This week, I’ve decided to revive it and make it generally available. So this weekend, I will work on a simple website to make it possible to generate your own domain lists. It will allow a list of, say, 50 domains free, but anything beyond that will cost. Hosting costs money, and this uses a lot of heavy computation.

dehumidifier 3: hot-plate hiccough

Yesterday, I was hoping to continue work on my dehumidifier project; using a desiccant wheel to adsorb water onto silica gel balls on one side of a machine, and hot air to evaporate water from the balls (regenerate them) on the other side.

The hot-plate arrived for the machine. I tested it, using an old 12V adaptor for power supply. It works well – heats up very quickly. I don’t know what its limits are, but I feel that 105°C is well within its capabilities.

But instead of then working on the machine itself, I spent a few hours making space on my son’s laptop and then installing the Unity development platform. I’ve been trying to get him away from Scratch and onto something more practical, because Scratch is nice, but it’s a dead end.

You won’t find people designing grown-up programs in Scratch, because it simply doesn’t have the capabilities: database access, complex graphics, file manipulation.

But Unity does, because it binds naturally to some languages that you can then use elsewhere. In this case, C#. But, beggars can’t be choosers!

I bought him a book on how to start coding in C# by creating a game in Unity. With the book, you build a side-scroller game. I really hope he likes it. More than that, I hope the book is not obsolete already!

On the desiccant wheel project, I realised there is a really bad problem – in order to regenerate, the silica gel balls must be baked at more than 100°C in order to let the water evaporate.

PLA (the plastic I print in at the moment) melts at 180°C, but its glass-transition temperature (its Tg) is between 65°C and 70°C. That means that if I have a section of my machine which is around 105°C, then the plastic there may warp.

While this is a real problem, I don’t think it’s insurmountable. The first thing I will do is to just try it as if it will all just work out fine. You never know! And if it turns out there is a problem, I will come up with a solution. That’s what I do.

The biggest warping problem will be the grill on the inner-facing part of the desiccant wheel (green in the image), which keeps the bags of silica gel balls from falling out. If that warps, it will quickly jam the wheel and stop it turning. A redesign of the wheel is major – it would involve changing how the wheel is turned, and probably changing the orientation of the entire machine.

The way I have it at the moment, the wheel rotates on-edge, with two big circles with small air-holes in them, to allow air into the silica gel balls contained inside. If I was to change the orientation so the wheel is held flat with the green grill facing upwards, then the grill would not be needed at all, and the hot air could be blown directly onto the bags themselves.

This is a major change to the design, though, because I would then need to change how the wheel is balanced (currently two ball-bearings on the edge) and how the wheel is rotated in the first place (currently a gearbox held against the grill).

A possible solution is to move the gearbox underneath the flat wheel. Hmm… Yeah, I think that’s actually a good solution. I have a plan now.

plantbox project 1: the train

I had an idea a year or two ago of a small train that travels along a track running along a wall of plants, each plant enclosed completely in a box so that water could not get in. This way the plants would not get overwatered. The train would then water the plants according to their individual needs.

I had envisaged a sort of water-carrying trailer. The main carriage would couple onto some exposed wires in the track coming from a box it was passing, and use those to detect moisture in the soil of the box. Depending on that, the trailer might then tip its load into a hole in the side of the box, and the train would then return to its depot for a recharge and a reload of water.

Yesterday I had an easier idea. A small aqueduct would travel along the top of the boxes, keeping a container on each box topped up with water. When the train detected dry soil, it would tip that box’s container over. The containers would be counterbalanced so that when empty, they go back up to the top of the box to refill, and in the top position they stay where they are unless physically pushed off-balance. This is a much simpler arrangement, I think, as the train is then just two motors and some electronics: one motor to drive the thing forwards/backwards, and the other motor to swing a hammer.

I’ve designed the basic shape of the thing to test out how the movement would work and to decide how to fit the electronics and battery. After 3D printing it with my Anet A8, the motors fit very snugly. The front wheels (with the square axle holes) fit perfectly over the motor axles, and the back wheels (round axles) are almost perfect – a little drilling needed to expand the axle holes slightly.

I think the way I’ll attach the electronics is to add some catches on the back part of the fuselage (where the round axle wheels are) so that a little 3D printed box of electronics can be snapped into place over it. This way if I change the design in future, I don’t need to reprint everything.

The boxes that the plants go into will be completely enclosed in transparent plastic, protecting them from the environment and acting as a mini greenhouse. The boxes will all have small water containers which are counterbalanced so that they can be tipped over to empty, and then when they right themselves, they start filling up again. I have not decided yet on the mechanism for refilling. Maybe a ballcock mechanism that is automatically lowered into the water container when it is in position?

new printer progress

What are 3D printers for? Printing new 3D printers! I’ve finished printing the CoreXY parts for my new printer. CoreXY is the method I’m using to control the X and Y axes (left/right, forwards/backwards), using parts designed by Louis Zatak, who put them on Thingiverse. He also provided parts for a Z-axis bed lift, using the same belt trick that the CoreXY uses.

The dimensions I’m printing are probably much too large – I went for 50cm cubed, which might end up with a print volume of about 44cm cubed, which is 8 times larger than my current printer (an Anet A8).

There will be problems, I’m sure – with 50cm rods, the x-carriage will probably droop near the middle. I have an idea how to solve that if it happens, but we’ll see if it’s necessary!

All that’s left to get for this new printer are some rods for the x-carriage, the y-axis, and the z-axis, and a hot-bed. The hot-bed is not strictly necessary, but it will make the printer work much better. Without a hot-bed, prints might curl upwards, and will probably have trouble sticking to the print bed.

After this is finished, I have a plan to build a brief-case sized foldable 3D printer.

The CoreXY design is compact and may be foldable, so I’m going to try to find a way to have the z-axis fold upwards towards the CoreXY frame, and then fold the arms of the mechanism inwards. In my head, it works…