Gooee IoT Sensing ASIC
The eye of Gooee’s ecosystem is a custom-engineered sensing ASIC, designed to capture environmental data and human activity, along with the capability to monitor LED chip performance. Facing downwards, the sensors detect motion, direction, footfall, ambient light levels, and temperature. Facing upwards, they monitor light output, color temperature, quality, and operating temperatures. They also track the luminaire’s operating life from first power-on, facilitating extended-warranty opportunities. Gooee can integrate its own external sensors or those of a third party, enhancing sensing capabilities for multiple applications. An onboard processor can locally analyze the data it receives, greatly reducing the amount of data sent back to the Gateway. Smart algorithms determine whether collected data need to be uploaded at once or deferred until the network is less occupied or idle. This speeds up local response times to changing conditions and is updatable over-the-air to improve performance. All this is packaged into a 25 mm² chip and integrated into a variety of luminaires and remote modules. We call this… nSense.
How AI is Decommoditizing the Chip Industry
by Daniel Kobran, Paperspace
Since the early days of computing, there has always been this idea that artificial intelligence would one day change the world. We’ve seen this future depicted in countless pop culture references and by futurist thinkers for decades, yet the technology itself remained elusive. Incremental progress was mostly relegated to fringe academic circles and expendable corporate research departments.
That all changed five years ago. With the advent of modern deep learning, we’ve seen a real glimpse of this technology in action: Computers are beginning to see, hear, and talk. For the first time, AI feels tangible and within reach.
AI development today is centered around Deep Learning algorithms like convolutional networks, recurrent networks, generative adversarial networks, reinforcement learning, capsule nets, and others. The one thing all of these have in common is they take an enormous amount of computing power. To make real progress towards generalizing this kind of intelligence, we need to overhaul the computational systems that fuel this technology.
The 2009 discovery of the GPU as a compute device is often viewed as a critical juncture that helped usher in the Cambrian explosion around deep learning. Since then, the investment in parallel compute architectures has exploded. The excitement around Google’s TPU (Tensor Processing Unit) is a case in point, but the TPU is really just the beginning. New dedicated AI chip startups raised $1.5 billion in 2017 alone, a CB Insights spokesperson told my team. This is astonishing.
We’re already seeing new startups enter the scene, challenging incumbents like Intel, AMD, Nvidia, Microsoft, Qualcomm, Google, and IBM. Emerging companies like Graphcore, Nervana, Cerebras, Groq, Vathys, Cambricon, SambaNova Systems, and Wave Computing are some of the rising stars paving the way for a future powered by deep learning. Though these startups are certainly well funded, these are early days and we have yet to see who the winners will be and what will come of the old guard.
Nvidia brought GPUs into the mainstream as alternatives for AI and deep learning. The company’s calculated transition from a leader in consumer gaming to an AI chip company has been nothing short of brilliant. Moves like its $3 billion investment in Volta and deep learning software libraries like CUDA/cuDNN catapulted it from a leading position to total market dominance. Last year, its stock went through the roof, CEO Jensen Huang was named Businessperson of the Year by Fortune, and it gained a reputation as the “new Intel.”
But while Nvidia may look completely different on the outside, it’s still churning out the same graphics cards it has been making for decades, and the future of GPUs as a technology for AI is uncertain. Critics argue that GPUs are packed with 20 years of cruft that is unfit for deep learning. GPUs are generic devices that can support a range of applications, from physics simulations to cinematic rendering. And let’s not forget that the first use of GPUs in deep learning back in 2009 was essentially a hack.
The rise of ASICs
Companies attacking the chip market are making the case that AI will perform lightyears faster on specialized silicon. The most likely candidate is the ASIC (application-specific integrated circuit), which can be highly optimized to perform a specific task.
If you think about chips as a progression from generic to specialized, the spectrum runs from CPUs at one end, through GPUs and FPGAs in the middle, to ASICs at the other extreme.
CPUs are very efficient at performing highly-complex operations — essentially the opposite of the specific type of math that underpins deep learning training and inference. The new entrants are betting on ASICs because they can be designed at the chip level to handle a high volume of simple tasks. The board can be dedicated to a set of narrow functions — in this case, sparse matrix multiplication, with a high degree of parallelism. Even FPGAs, which are designed to be programmable and thus slightly more generalized, are hindered by their implicit versatility.
The performance speedup of dedicated AI chips is evident. So what does this mean for the broader technology landscape?
The future is decommoditized
GPUs are already less commoditized than CPUs, and the huge surge of investment in AI chips suggests that GPUs will ultimately be replaced by something even more specialized. There is some irony here, considering Nvidia came into existence on the premise that Intel’s x86 CPU technology was too generalized to meet the growing demand for graphics-intensive applications. This time, neither Intel nor Nvidia is going to sit on the sidelines and let startups devour this new market. The opportunity is too great.
The likely scenario is that we’ll see Nvidia and Intel continue to invest heavily in Volta and Nervana (as well as their successors). AMD has been struggling due to interoperability issues (see software section below) but will most likely come up with something usable soon. Microsoft and Google are making moves with Brainwave and the TPU, and a host of other projects; and then there are all the startups. The list seems to grow weekly, and you’d be hard-pressed to find a venture capital fund that hasn’t made a sizable bet on at least one of the players.
Another wrinkle in the chip space is edge computing, where inference is computed directly on devices as opposed to in-cloud environments or company data centers. Models can be deployed directly on the edge to satisfy low-latency requirements (mobile) or make predictions on low-powered, intermittently-connected devices (embedded, IoT). There have been several announcements recently about edge-based AI accelerators, such as Google’s Edge TPU.
Open questions about the future
Perhaps the most significant challenge facing any newcomer in the chip space is surprisingly not hardware — it’s software. Nvidia has a stranglehold on the market with CUDA/cuDNN, which are software libraries that form a necessary abstraction layer that sits on top of the chip, enabling frameworks like TensorFlow and PyTorch to run without writing complex, low-level instructions. Without these high-level libraries, chips in general are difficult to target from a code perspective.
The problem is, CUDA and cuDNN are not open source. They are proprietary packages that can only run on Nvidia hardware. Before developers can leverage an ASIC, the provider needs to first find a new way to make its chip easily accessible to frameworks. Without this, there won’t be significant (if any) adoption by developers — developers will just stick with Nvidia because it works. Either there needs to be an open source equivalent to CUDA/cuDNN, or frameworks will need to be ported to target specific ASICs, the way Google did with the TPU and TensorFlow. This is a huge barrier without an obvious solution.
What does this all mean?
At least in the short term, we’ll see a plethora of chips, some competing directly against each other and others that focus on particular aspects of training and inference. What this means for the industry is that developers will have options, lots of them. Unlike the CPU market which is heavily commoditized, the industry looks like it’s headed towards a more diverse, heterogeneous, and application-specific future.
While we don’t know what the specific outcome will be, one thing is certain: The future of AI lies in purpose-built ASICs — not commodity hardware.
Daniel Kobran is cofounder of GPU cloud platform Paperspace.
Mining 101: A Step-by-Step Guide to Starting Your Own Profitable Cryptocurrency Mining Operation
Via Luxor Mining
Understanding Cryptocurrency Mining
What’s PoW Cryptocurrency Mining?
Mining cryptocurrencies has two main functions: adding new verified transactions to the blockchain digital ledger and issuing new coins.
Each time a cryptocurrency transaction is made, miners compete to secure and verify the operation. To do so, miners have to solve a complicated mathematical problem that adjusts its difficulty to ensure a stable issuance rate. Once the miner finds a hash — the output of a hash function — that is lower than the target defined by the blockchain protocol, the miner is entitled to receive the block reward. This block reward is paid in the cryptocurrency being mined.
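The hashing contest can be sketched in a few lines of Python. This is a toy illustration with a deliberately easy target, not Bitcoin’s actual block-header format:

```python
import hashlib

def mine_block(header, target, max_nonce=2**32):
    """Try nonces until the double-SHA-256 of header+nonce falls below target."""
    for nonce in range(max_nonce):
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # winning nonce: the hash is below the protocol's target
    return None  # exhausted the nonce space without a valid hash

# An easy target so the demo succeeds after only a handful of attempts.
easy_target = 2**252
nonce = mine_block(b"example block header", easy_target)
```

Lowering the target makes valid hashes rarer, which is exactly how the protocol raises difficulty.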
As more miners join a network, the mining difficulty rises, making it more difficult to be the block finder. This increase in complexity makes the process of mining time consuming and extremely energy intensive. Because of this, miners are always looking for the cheapest electricity rates and the most efficient mining hardware.
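The difficulty adjustment itself is simple arithmetic. As a sketch, here is roughly how Bitcoin retargets every 2,016 blocks, scaling the target by how quickly the previous window of blocks actually arrived:

```python
def retarget(old_target, actual_seconds, expected_seconds=2016 * 600):
    """Bitcoin-style retarget: scale the target by how fast the last window of
    blocks arrived, clamped to at most a 4x change per adjustment."""
    ratio = actual_seconds / expected_seconds
    ratio = max(0.25, min(4.0, ratio))  # the protocol clamps each step to 4x
    return int(old_target * ratio)      # a smaller target means higher difficulty

# Blocks came in twice as fast as the 10-minute schedule, so the target halves
# (difficulty doubles).
new_target = retarget(2**220, actual_seconds=2016 * 300)
```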
Why Should You Mine Cryptocurrency?
Mining cryptocurrency provides the miner with three key benefits: the cryptocurrency reward, transactional freedom, and the unique functionality of the crypto that they mined.
Miners are probabilistically expected to obtain cryptocurrency when they expend hashing power toward the cryptocurrency’s blockchain. Like other financial assets, this reward holds value on the open market. Recently, cryptos that are sufficiently liquid are being used as a method of payment.
Cut Out the Middleman
Mining cryptocurrency enables transactional freedom by removing the need for intermediaries. By definition, blockchain technology allows cryptocurrency to be decentralized because it is governed by the entities mining and transacting with it rather than central banks and financial intermediaries. Therefore, cryptocurrency transactions allow consumers to avoid fees associated with intermediaries, like PayPal or contract attorneys, and currency-value fluctuations caused by monetary policy.
Miners can also access the unique functionality of the cryptocurrency that they are mining. For example, Monero offers a discreet currency that ensures others cannot see your balances or track your activity. LBRY, on the other hand, enables a content sharing and publishing platform that is owned by its users instead of a third party.
Cost of Mining
Mining cryptocurrency does come with its trade-offs: the cost of electricity and hardware, and diseconomies of scale.
Mining cryptocurrency requires a large volume of electricity due to the computational intensiveness of the task. While electricity costs vary drastically across geographies, researchers at Bitcoinist assert that miners are most profitable when they locate their mining hardware in low-cost regions such as Venezuela or Eastern Europe, a task that can be difficult given the shortage of mining farms in said regions.
Adding to the economic strain, the upfront cost of hardware is not cheap. Top-end cryptocurrency miners can cost as much as $5,000 and take many months to break even, depending on the amount of hash power on the network, prevailing market prices, and electricity costs, an outlay that may not fit everyone’s budget.
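To make the breakeven math concrete, here is a back-of-the-envelope calculator. Every number plugged in below is hypothetical; real revenue moves with network difficulty and coin prices:

```python
def days_to_breakeven(hardware_cost, coins_per_day, coin_price,
                      power_kw, electricity_per_kwh):
    """Days until cumulative mining profit covers the hardware cost."""
    daily_revenue = coins_per_day * coin_price
    daily_power_cost = power_kw * 24 * electricity_per_kwh
    daily_profit = daily_revenue - daily_power_cost
    if daily_profit <= 0:
        return None  # never breaks even at these prices
    return hardware_cost / daily_profit

# Hypothetical $5,000 miner earning 0.0005 BTC/day at $10,000/BTC,
# drawing 1.4 kW at $0.08/kWh.
days = days_to_breakeven(5000, 0.0005, 10_000, 1.4, 0.08)
```

The `None` branch matters: if electricity costs exceed revenue, no amount of waiting recovers the hardware outlay.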
Lastly, cryptocurrency miners face diseconomies of scale. As additional miners begin to mine a specific cryptocurrency and the finite quantity of remaining blocks decreases, the expected payout per unit of hash power expended decreases. Moreover, as rising demand for electricity from mining uptake and societal consumption pushes electricity costs upward, miners’ profit margins risk shrinking further over time.
Understanding How You Can Mine
What Hardware Do I Need to Start Mining?
Mining cryptocurrency requires one to purchase specialized hardware, two types of which exist: graphics processing units (GPUs) and application specific integrated circuits (ASICs).
Option 1: Choosing a GPU
Mining a coin requires the miner to use an algorithm that corresponds to that unique coin. The mining efficiency a GPU exhibits on a given algorithm depends on the GPU’s specs. Therefore, a GPU miner should choose hardware based on the algorithm they want to mine with, and then select among options based on brand reliability, power consumption, and price. For example, AMD cards come well recommended for the CryptoNight and CryptoNight-Heavy algorithms used by coins such as Monero and Loki, whereas Nvidia cards are noteworthy with respect to the Equihash and Ethash algorithms used for Zcash, Zencash, and Ethereum.
Custom GPU Rig
Individuals looking to build a custom GPU rig must consider a number of components, including the graphics processing unit (GPU), power supply unit (PSU), motherboard, risers, and rig frame. Additional components will lengthen one’s breakeven period. A detailed configuration description can be found in this article from our friends over at Coin Central.
Those with limited hardware knowledge, or limited time to build a rig, can opt to purchase a pre-built mining rig for a slightly higher price. Some pre-built rig manufacturers provide a six- to twelve-month warranty, and there are several noteworthy pre-built rigs worth considering.
Option 2: Choosing an ASIC
An ASIC is a machine designed to mine a specific type of cryptocurrency and algorithm. A copious number of ASICs exist, with the number of options generally correlated to demand for a particular coin. Though Bitmain manufactures the majority, competitors such as Innosilicon and Halong are looking to expand their footprint in the space with new ASIC releases.
Know Your End Goal
When choosing an ASIC, you must first decide your objective: to obtain a specific type of coin or to opportunistically purchase machines to capitalize on disproportionately high profits.
If you are obtaining an ASIC to mine a specific type of coin, you would begin by choosing the coin and then identifying what algorithm is used to mine it. For coins whose algorithms are not supported by any ASIC manufacturer, one’s only option is to mine with a GPU.
Once you outline all the potential ASICs that are available for the select coin, you can compare them. Key metrics of comparison include hash rate (revenue potential) relative to electricity usage (variable cost), initial equipment cost, and the date by which you would be able to receive the ASIC (a function of release date and shipping speed).
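That comparison can be reduced to a simple ranking by estimated daily margin. The specs and prices below are invented for illustration:

```python
def rank_asics(candidates, coin_price_per_th_day, electricity_per_kwh):
    """Rank candidate ASICs by estimated daily margin: revenue from hash rate
    minus electricity cost from power draw."""
    def daily_margin(spec):
        revenue = spec["th_per_s"] * coin_price_per_th_day
        cost = spec["watts"] / 1000 * 24 * electricity_per_kwh
        return revenue - cost
    return sorted(candidates, key=daily_margin, reverse=True)

# Made-up machines: A hashes faster but draws far more power.
miners = [
    {"name": "Miner A", "th_per_s": 14, "watts": 1400},
    {"name": "Miner B", "th_per_s": 10, "watts": 800},
]
best = rank_asics(miners, coin_price_per_th_day=0.30, electricity_per_kwh=0.10)[0]
```

Note how the ranking can favor the slower machine: at these illustrative prices, Miner B’s lower power draw outweighs Miner A’s extra hash rate.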
Some speculators purchase an ASIC to earn the higher profits generally realized by first-movers on new ASICs. Typically, the most profitable machines with the shortest breakeven periods are new releases, since they can out-hash incumbent GPUs. Other opportunists seek to scoop up used or older machines at a discounted price. If this is done during a bear market, when some miners find it unprofitable to mine, and the market later rebounds, the owners of these machines may be able to earn a return on their investment despite the ASIC’s relatively low hash-rate-to-electricity-cost ratio.
Before You Buy
Regardless of one’s intention when purchasing an ASIC, it is essential to be wary of potential sticking points. The first is the miner’s projected breakeven period, based on hash rate relative to electricity cost. Another is the probability that another manufacturer will release an ASIC to mine the same type of coin in the near term, as the higher competition will decrease your current ASIC’s profitability due to increased network difficulty.
Key Differences Between GPU and ASIC Machines
On a high level, the following factors differentiate GPU miners from their ASIC counterparts:
- Flexibility in mining different coins: GPUs can mine multiple algorithms and coins, whereas ASICs are made to mine a single coin or algorithm. This means that GPUs can be adjusted to mine whichever coin is most profitable at a given point in time, whereas ASICs can become a cash-burner or paperweight in the case that mining their respective coin becomes unprofitable.
- Risk-return preference: While GPUs offer additional flexibility, the lower risk generally comes with lower returns relative to an ASIC when mining the same coin under the same market conditions.
- Resale value: GPUs tend to have a higher resale value, as they can be used to mine multiple coins and their hardware can be reused for computational purposes unrelated to mining cryptocurrency. ASICs, on the other hand, generally depreciate more quickly, as they lose their profitability in a short period of time.
- Ability to host the machine locally: Another consideration when choosing between GPU mining and ASIC mining is your ability to host your machines locally at your home or office. While many GPU miners run their rigs at home, ASIC machines in any quantity are generally intrusive in a household, as they require significantly more power, create loud noise, and emit more heat.
How to Choose a Mining Pool?
At its core, mining is the process by which computational power (hash power) is used in an attempt to unlock a block in a blockchain. Each block unlocked provides a reward. The more attempts (hashes) you can perform per second, the higher the probability that you will obtain the reward. Because miners by themselves typically don’t have enough hashing power to frequently find blocks, they join a pool that combines the hash power of multiple miners to hash blocks.
With greater collective hashing power, it is easier to find blocks with decreased variance. The reward is split among the miners relative to the proportion of hash power that they contributed to the pool. The pool operator collects a small service fee. Choosing the right mining pool is crucial for the efficiency of a mining operation.
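The proportional split works out as follows; the pool fee and hash-power figures below are illustrative:

```python
def split_block_reward(block_reward, contributions, pool_fee=0.01):
    """Split a block reward among pool members in proportion to the hash power
    each contributed, after the operator takes a small service fee."""
    total = sum(contributions.values())
    payable = block_reward * (1 - pool_fee)
    return {miner: payable * hashes / total
            for miner, hashes in contributions.items()}

# Three miners contributing 60/30/10 shares of the pool's hash power.
payouts = split_block_reward(12.5, {"alice": 60, "bob": 30, "carol": 10},
                             pool_fee=0.02)
```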
What to Look for When Choosing a Mining Pool:
- Payout structure of preference: A pool can payout rewards to its miners based on a Pay-Per-Share or Pay-Per-Last-N-Shares basis, two methods with their own costs and benefits.
- Team trustworthiness and pool reliability: Given the lack of governance and infancy in the crypto space, it is vulnerable to unethical practices by service providers. This danger increases the importance of choosing service providers that have credible development teams.
- Extra features: Quality pools generally offer enhanced UI and UX design, equipping users with functionality that allows them to monitor their performance.
- Fee: Generally, service fees range from 1–3% and exist to pay for the pool’s operating expenses and the interface functionality provided to users.
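To illustrate the payout-structure point above, here is a rough sketch of the two common schemes. The share values and window sizes are made-up examples, and real pools apply more nuanced accounting:

```python
def pps_payout(shares, share_value):
    """Pay-Per-Share: a fixed amount per submitted share; the pool bears
    the luck of actually finding blocks."""
    return shares * share_value

def pplns_payout(my_shares_in_window, window_shares, block_reward, fee=0.01):
    """Pay-Per-Last-N-Shares: pays only when the pool finds a block, in
    proportion to your shares within the last-N window."""
    return block_reward * (1 - fee) * my_shares_in_window / window_shares

steady = pps_payout(1000, 0.0001)            # predictable income per share
lucky = pplns_payout(1000, 100_000, 12.5)    # larger but block-dependent
```

PPS smooths variance for the miner at the cost of a slightly lower expected rate; PPLNS passes block-finding luck through to the miners.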
Alternative to Mining on a Pool: Selling Hashing Power
On pools, miners use their own hashing power. The coins they mine get deposited into their wallet where they can then trade them for another coin or hold on to them.
A growing user base is opting to sell their hashing power rather than use it to mine on pools. This transition is made possible by hash exchanges such as NiceHash and Genesis Mining, both of which are platforms that enable users to buy and sell hash power while charging a service fee. On NiceHash, one can use any ASIC or GPU to supply hashing power, and in return for selling it, you receive BTC. This structure allows GPU miners, and people whose machines mine algorithms other than Bitcoin’s, to earn a return in BTC.
If you want to own BTC and don’t have an S9, T1, or other BTC miner, then selling your hashing power makes sense. However, if you are looking to hold the coins you mine, the most profitable route is typically to join a pool, with its lower fees.
Configuring Your Mining Machine
Once you purchase your ASIC or GPU miner and choose whether you want to direct the hash power toward a pool or an exchange, your last step is to configure your miner. A simple sequence of steps is followed to do this:
- Power on the miner by plugging it into a power outlet
- Connect the miner to the internet via an ethernet cable
- Find your miner IP via your router or IP scanner software
- Input three parameters (check your pool for different setup scenarios):
– Pool address
– Wallet address (check here if you need some help setting one up)
- Done! Now you’re mining!
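The parameters from the steps above typically end up in a small configuration like the sketch below. The field names and pool URL are placeholders, and the wallet.worker login string is a common stratum convention rather than a universal rule:

```python
# Hypothetical miner configuration; not any specific vendor's format.
config = {
    "pool_url": "stratum+tcp://pool.example.com:3333",
    "wallet_address": "YOUR_WALLET_ADDRESS",
    "worker_name": "rig01",
}

def stratum_login(cfg):
    """Build the login string many stratum pools accept: wallet.worker."""
    return f'{cfg["wallet_address"]}.{cfg["worker_name"]}'

login = stratum_login(config)
```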
Understanding How to Mine Profitably
Where to Host Your Miner: Colocation vs. Cloud Mining vs. Home Hosting
Housing and Colocation
When miners start off, they usually ship the ASIC to their home and host it there. They quickly realize how intrusive, and likely unprofitable, hosting an ASIC at home is. Instead, people will host their machines at a colocation even if they only have one or two. Colocations, or “mining farms,” are data centers located in low-cost electricity regions that offer hassle-free, profit-maximizing hosting for your rig.
Cloud Mining
Put simply, cloud mining is buying straight hashing power rather than buying a machine that creates hashing power. This method is similar to being a buyer on the NiceHash market. Since you don’t have to invest in the hardware, it’s a great way to get into mining at a low initial cost. It’s also hassle-free and involves less risk. That said, if you are investing a lot more money into mining, it can be more profitable to buy a mining machine for yourself.
Home Hosting
For those more interested in mining and the technology as a hobby rather than strictly to make a return, hosting in your home can be a lot of fun. It is cool to see the hardware and work with it.
Considerations to Increase Profitability
Given the burgeoning competition in the mining space, a growing concern is that it’s becoming increasingly difficult for miners to be profitable. To combat this, there are several variables to keep in mind to ensure that you achieve a profitable mining operation:
- Consider a Colocation (colo): The hosting cost at a colo, such as Minery, may be cheaper than your local electricity rates, allowing you to increase your operating margin.
- Maximize Up-Time: When it comes to mining, NOTHING is more important than up-time. Every minute your mining rig is offline, you are wasting electricity costs and missing the opportunity to obtain crypto. It is crucial to build robustness into your design by investing in quality components and testing, preparing your rig for scenarios where it overheats or otherwise fails.
- Monitoring and Alerts: Software such as AwesomeMiner lets you know when your miners are down so that you can act ASAP.
- Thermodynamic Efficiency: Choose a location and cooling method that allows you to avoid overheating while minimizing your electricity consumption costs.
- Efficiency Bumps:
– A GPU BIOS modification (mod) can enable a significant bump in performance
– For those with the technical competency, test different types of mining software and compile the miner software yourself to avoid paying fees
- Consider Discounted Hardware: Keep an eye out for deals on GPUs and ASICs, which may mean buying second-hand equipment.
- ASIC Arbitrage Opportunities: Leverage the following tricks to save money on ASICs:
– Purchase Bitmain coupons for 20% of the coupon’s face value. Historically, it has not been uncommon to find a 100 USD Bitmain coupon selling for 20 USD or less.
– Try to buy miners from the first or second batch to ensure the best ROI possible.
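The coupon arithmetic from the last tip is worth making explicit; the prices below are hypothetical:

```python
def effective_cost(list_price, coupon_face_values, price_per_face_dollar=0.20):
    """Effective hardware cost when face-value coupons can be bought on a
    secondary market at a fraction of face value (e.g. a $100 coupon for $20)."""
    face_total = min(sum(coupon_face_values), list_price)  # can't go below zero
    coupon_cost = face_total * price_per_face_dollar
    return list_price - face_total + coupon_cost

# A $3,000 miner with $300 of face-value coupons bought for $60 nets ~$2,760.
cost = effective_cost(3000, [100, 100, 100])
```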
The long-term profitability of mining a cryptocurrency diminishes over time as network hash power and difficulty increase. The sooner you enter a market, the higher your likelihood of profit, but it’s essential to do your research before diving in. With the right setup and equipment, even a novice can become a profitable miner.
Biometric technology and ultrarunning: how it works and how it can help you go the distance
The 50K, 50-mile, 100-mile distance and beyond are all characterized as ultramarathons for the simple reason that the distances are farther than a marathon.
But it isn’t just the farther distances that set these races apart from a marathon. When going those extra miles, you are often doing them at a slower, more methodical pace. In many of these races you are running, or rather hiking, in the mountains, traversing rivers, or trekking on long dirt paths.
Rather than burning through sugars and electrolytes like you would in a marathon, ultramarathons require that you dip into your fat stores. You will also lose salt, and with it the ability to retain water. You will need to eat on the go, all while paying close attention to digestion issues that may creep up (or down), in order to complete the goal at hand.
While there are differences between the two, there is one very important similarity: the need to monitor how your body is responding to your training, in conjunction with outside life factors. How are you sleeping? How is your gait, and has it been altered by injury or even a change of footwear? How is your heart rate, and in particular, your heart rate variability (HRV)?
Unfortunately, very few studies have been done on HRV and ultramarathons specifically. This is likely because ultrarunning has only recently emerged from the underground into the mainstream, as has the general use of HRV trackers. Because of this, there is still a bit of a learning curve to overcome.
Even so, the world of biometrics is growing, and the running and ultrarunning world is beginning to embrace the technology to better understand how each individual responds to various factors.
So, how can biometrics help ultrarunners?
For starters, let’s talk about HRV.
Heart rate variability, simply put, is the amount of time between heartbeats. As opposed to heart rate (BPM), where a high rate indicates exertion and a low rate indicates an easier pace or rest, HRV works differently: generally speaking, the more variability, the better. A very low HRV is a sign of overtraining, stress, or illness, whereas a high HRV shows that your body is in good, rested condition, free or nearly free of stressors, both mental and physical.
A low HRV tells you to back off the training plan and perhaps add some lower-mileage days or even rest. It may even show that there are life stressors that need to be addressed. A high HRV, on the other hand, says that you are ready to run fast and/or go for the (ultra) distance.
In order to determine whether your HRV is high or low, you will need to establish your own baseline through a fitness tracker with this capability. Over the course of several days (preferably during a normal-stress week), your wearable will record readings to determine your average HRV. That baseline lets you interpret successive readings as high or low.
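As a rough sketch of how such a baseline comparison might work (the readings and the one-standard-deviation band are illustrative, not clinical guidance):

```python
from statistics import mean, stdev

def hrv_status(baseline_readings, today_ms, band=1.0):
    """Compare today's HRV (in ms) against a personal baseline built from
    several days of readings. Thresholds here are illustrative only."""
    base, spread = mean(baseline_readings), stdev(baseline_readings)
    if today_ms < base - band * spread:
        return "low"    # consider backing off training or resting
    if today_ms > base + band * spread:
        return "high"   # recovered and ready for a hard effort
    return "normal"

# A week of hypothetical readings, then a markedly low morning value.
status = hrv_status([62, 65, 60, 63, 64], 48)
```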
Active and Resting Heart Rate
While heart rate, both active and resting, is not as relevant as we once thought, it is still an indicator of exertion and how our body has responded to exercise in real time, whereas HRV tracking is not done in real time.
Many ultrarunners use heart rate tracking as an everyday, even all-day, tool. Group fitness instructor Dana Anderson, who won the 2016 Javelina Jundred and the 2017 Buffalo Run 100, says that she wears her heart rate monitor all day long. Biostrap lets you monitor active heart rate by integrating with many third-party chest straps, and being able to contextualize active heart rate with other insightful biometrics is a winning recipe for optimal training.
“I monitor my resting heart rate (RHR) to watch out for overtraining. If my RHR goes up a few beats during training, I take an easy day, rest day, or focus on getting more sleep,” she said. “I review HR data after runs to make sure I achieved my objective, whether that be improved anaerobic threshold or working in my aerobic zone. When I cross train, I watch my heart rate; it is much more variable for me while cross training. When I run, I pay attention to RPE (rate of perceived exertion) during and then look at numbers after. It took many years of practice and watching my heart rate while running, but now I’m very in tune with my heart rate. I can usually guess within a beat or two of what the number is while I’m running.”
Determining Sleep Patterns
As important as your training regimen, if not more so, is sleep. Sleep is when your body and mind rest and repair. In fact, during any given night you will transition between several different stages of sleep, all of which play a critical role in getting your mind and body on track during waking hours.
Using a wearable tech device that tracks your sleep will let you know if you are achieving those all-important stages of sleep and if you are getting enough deep sleep and sleep in general. With that information at your disposal, you can make the appropriate adjustments so that you can rest and recover for optimum training and racing.
Now, your gait
Wearable tech devices like Biostrap can track your gait and analyze it to inform future exercise. With the combination of a shoe pod and wrist band, you can capture movement data and analyze your movement with it.
This can help you determine if your gait has changed, where to make adjustments, etc. It really is a great tool.
Overall, using biometrics is a great way to understand your body when training and performing ultra distance races. It takes out the guesswork for you so that you can pay close attention to things your body needs. You may not feel stressed, but your HRV shows you are. This will help you take a closer look at your training and even life. You may be feeling tired during a pace that should be easy, and your active heart rate lets you know that you are pushing a little too hard at that moment.
You may think that you’re getting enough sleep, but find through your fitness tracker that you are not getting enough deep sleep. Perhaps you have purchased a new pair of running shoes that has altered your gait, making you more prone to injury.
All of these insights can be helpful when training for your first, or even 10th, ultramarathon, so that you can go the distance.
Fixing Bitcoin’s Energy-consumption Problem
Bitcoin mining consumes a lot of energy. Every once in a while, someone compares this to another random metric — say, the energy consumption of Ireland — and it induces a collective gasp. How can this thing be sustainable?
Well, it probably isn’t. But, long-term, it might not be that big of a deal.
It’s true that Bitcoin mining is an awful energy drain. Hundreds of thousands of application-specific integrated circuits, or ASICs — specialized hardware aimed exclusively at mining cryptocurrencies — hum in huge halls, mainly located in China, and use enormous amounts of electricity to create new bitcoins. They also power the Bitcoin transaction network, but they do it in a horribly inefficient way. The fact that a huge chunk of China’s electricity comes from fossil fuels makes the situation even worse.
It just seems so wrong, and on some levels, it is.
But things aren’t that simple. We don’t know, exactly, how power-hungry Bitcoin really is. And whatever the figure is, Bitcoin certainly doesn’t need that much energy to run. Furthermore, energy consumption issues can potentially be fixed with a future upgrade of the Bitcoin software, which is easier than, say, reducing the energy footprint of Ireland. Finally, there are other cryptocurrencies out there working on a solution to this problem.
Despite what you might’ve read, we don’t have exact figures on Bitcoin’s energy consumption. A site called Digiconomist keeps stats on how much energy Bitcoin is consuming, and it’s the primary source for the stories circulating on the subject. Some of these stats look horrific: Bitcoin’s current annual energy consumption is 30.2 terawatt-hours (TWh), more than the consumption of 63 individual countries, and a single Bitcoin transaction consumes enough energy to power nearly 10 U.S. households for an entire day. But we shouldn’t blindly trust those numbers.
Getting exact energy consumption figures for miners, many of whom are secretive and located in China, is not easy, so Digiconomist estimates them indirectly. The site makes quite a few broad assumptions: for example, that miners on average spend 60% of their revenue on operational costs, and that every 5 cents of those costs buys 1 kWh of electricity. It’s impossible to say how accurate Digiconomist’s index is, and it could be off by a wide margin.
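Under those assumptions, the estimate reduces to simple arithmetic. Here is a minimal sketch of that style of calculation; the revenue figure in the example is purely illustrative, not a measured value:

```python
def estimate_annual_energy_twh(annual_miner_revenue_usd: float,
                               operational_cost_share: float = 0.60,
                               usd_per_kwh: float = 0.05) -> float:
    """Digiconomist-style estimate: revenue -> operating costs -> kWh -> TWh."""
    operational_costs = annual_miner_revenue_usd * operational_cost_share
    kwh = operational_costs / usd_per_kwh
    return kwh / 1e9  # 1 TWh = 1e9 kWh

# With a hypothetical $2.5 billion in annual mining revenue, these
# assumptions yield 30 TWh, roughly the figure quoted above.
print(estimate_annual_energy_twh(2.5e9))  # prints 30.0
```

Note how sensitive the result is to the inputs: halving the assumed electricity price doubles the estimate, which is why the index should be treated as a rough gauge rather than a measurement.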
Transactions don’t matter
Furthermore, the energy consumption is rising because of Bitcoin’s quite insane price rise, not because the network actually requires it.
Bitcoin’s price is at $10,466 at the time of writing, up more than 1,000% since the beginning of the year. This price growth is a huge incentive for miners to add even more ASICs and use up even more energy, but it doesn’t have much to do with the number of transactions on the network. In fact, the number of transactions on Bitcoin’s network hasn’t significantly increased in a year.
There are two reasons for this. First, Bitcoin’s network can’t handle many more transactions (though a recent software upgrade, yet to take full effect, should improve this). Second, Bitcoin isn’t exactly doing its job the way its creator, Satoshi Nakamoto, intended: due to the price rise, few owners actually use their bitcoins to purchase goods; instead, everyone is either hoarding them or speculating on them.
This means that talking about the energy cost of one Bitcoin transaction is misleading. A figure that’s thrown around often is the energy cost of one Visa transaction (also a very rough estimate), which is orders of magnitude smaller than that of one Bitcoin transaction. But for Bitcoin, the transactions are not the problem.
In fact, you could theoretically run Bitcoin’s entire network on a dozen 10-year-old PCs. It wouldn’t be very secure against attacks, though, which is another reason miners constantly compete for dominance: no one wants any single miner to control 51% of the network’s hash power, as that would let them take over Bitcoin completely.
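The 51% threshold matters because of simple probability. In Nakamoto’s original analysis, an attacker holding a share q of the hash power (honest share p = 1 − q) who is z blocks behind the honest chain eventually catches up with probability (q/p)^z when q < p, and with certainty once q ≥ p. A minimal sketch of that formula:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with hash-power share q ever
    overtakes the honest chain from z blocks behind."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker always wins eventually
    return (q / p) ** z

# A 30% attacker six confirmations behind almost never catches up,
# but at 51% the attack succeeds with certainty.
print(catch_up_probability(0.30, 6))
print(catch_up_probability(0.51, 6))  # prints 1.0
```

This is why exchanges wait for several confirmations: each additional block shrinks a minority attacker’s odds geometrically, while no number of confirmations protects against a majority.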
But it’s important to point out that the fact that Bitcoin is currently an enormous energy drain is not due to some irreparable flaw in Bitcoin’s protocol. Bitcoin can run more efficiently; it could probably run more efficiently than Visa as it doesn’t require offices, staff and other overhead energy costs.
For that to happen, though, something needs to change.
The problem already has a solution…
One project Bitcoin could take cues from is Ethereum, the second-largest cryptocurrency right now. According to Digiconomist, Ethereum uses roughly a third as much energy as Bitcoin, and yet there are twice as many transactions per day on Ethereum’s network.
And even that could improve considerably in the near future, as Ethereum’s development team plans to gradually switch to a completely different mechanism for verifying transactions. Called proof-of-stake, it would replace the current system, called proof-of-work (also used by Bitcoin): instead of having miners race to solve compute-intensive puzzles, it selects validators in proportion to the coins they hold and stake. The concept isn’t implemented in Ethereum yet, but if it works as intended, the energy costs, compared to proof-of-work, would be orders of magnitude smaller.
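To make the difference concrete, here is a toy sketch, not real Bitcoin or Ethereum code: proof-of-work burns CPU cycles brute-forcing a nonce whose hash meets a difficulty target, while proof-of-stake simply picks a validator weighted by coin holdings.

```python
import hashlib
import random

def mine(block_data: str, difficulty: int = 4) -> int:
    """Proof-of-work: search for a nonce whose SHA-256 hash has
    `difficulty` leading zero hex digits. Cost grows as 16**difficulty."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def pick_validator(stakes: dict, seed: int) -> str:
    """Proof-of-stake: choose a validator with probability proportional
    to stake; no brute-force search, so almost no energy is spent."""
    rng = random.Random(seed)
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names])[0]

nonce = mine("block #1")
print("PoW nonce found after many hash attempts:", nonce)
print("PoS validator:", pick_validator({"alice": 60, "bob": 30, "carol": 10}, seed=42))
```

The mining loop does thousands of hashes even at this toy difficulty, and real networks crank the difficulty up until the search takes minutes across the whole planet; the stake-weighted pick, by contrast, is a single random draw.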
Bitcoin’s developers aren’t looking to switch to proof-of-stake very soon, but they are working on a solution called Lightning Network that would ideally vastly increase the number of transactions on the network without the need for additional hash power.
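The core idea behind a payment channel, the building block of the Lightning Network, can be sketched in a few lines. This is a heavy simplification of a hypothetical two-party channel; real Lightning uses signed commitment transactions and routes payments across a network of channels:

```python
class PaymentChannel:
    """Toy two-party channel: only opening and closing touch the
    blockchain; payments in between are just local balance updates."""

    def __init__(self, alice_deposit: int, bob_deposit: int):
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.on_chain_txs = 1  # the funding transaction

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount  # off-chain: no mining needed

    def close(self) -> dict:
        self.on_chain_txs += 1  # the settlement transaction
        return dict(self.balances)

channel = PaymentChannel(alice_deposit=100, bob_deposit=50)
for _ in range(1000):                 # 2,000 payments back and forth...
    channel.pay("alice", "bob", 1)
    channel.pay("bob", "alice", 1)
print(channel.close())                # ...settled with only 2 on-chain transactions
```

The point is the ratio: thousands of payments cost the same on-chain footprint, and therefore the same mining energy, as two, which is how Lightning could multiply transaction capacity without additional hash power.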
…but you never know with Bitcoin
So is Bitcoin’s lust for energy just a temporary issue that will easily go away? Probably not. Ethereum’s leadership has successfully implemented major changes on the network in the past without many problems. Bitcoin, on the other hand, hasn’t been able to implement a far simpler upgrade for years, as any upgrade needs a consensus of nearly all users of the network or a (potentially dangerous) hard fork. And the Lightning Network, as promising as it is, is just a concept at this stage.
But Bitcoin’s problems aren’t insurmountable. The solutions are already out there. Sooner or later, Bitcoin will have to adapt.
If it doesn’t, in the long run some other cryptocoin will solve it and take its place. Bitcoin has the first-mover advantage, but that quickly wears off when everyone else is leaner, faster, and more efficient than you. And that’s perfectly alright; Bitcoin and its energy woes might be forgotten some day, but cryptocurrencies are here to stay.