
All the leaks about the Google Pixel 2… will it manage to outdo the iPhone X?

By Ali Abdo

Arageek

After negative reactions to the leaked design of the phone, will the Google Pixel 2 be able to compete with the iPhone X? Especially after Google's acquisition of the team responsible for the phone at HTC.

Arageek

Announcing the Matterport3D Research Dataset

By Matt Bell

At Matterport, we’ve seen firsthand the tremendous power that 3D data can have in several domains of deep learning. We’ve been doing research in this space for a while, and have wanted to release a fraction of our data for use by researchers. We’re excited that groups at Stanford, Princeton, and TUM have painstakingly hand-labeled a wide range of spaces offered up by customers and made these labeled spaces public in the form of the Matterport 3D dataset.

This is the largest public dataset of its kind in the world, and the labeling of this dataset was a very significant effort.

The presence of very large 2D datasets such as ImageNet and COCO was instrumental in the creation of highly accurate 2D image classification systems in the mid-2010s, and we expect that the availability of this labeled 3D+2D dataset will have a similarly large impact on improving AI systems’ ability to perceive, understand, and navigate the world in 3D. This has implications for everything from augmented reality to robotics to 3D reconstruction to better understanding of 2D images. You can access the dataset and sample code here and read the paper here.

What’s in the dataset?

This dataset contains 10,800 aligned 3D panoramic views (RGB + depth per pixel) from 194,400 RGB + depth images of 90 building-scale scenes. All of these scenes were captured with Matterport’s Pro 3D Camera. The 3D models of the scenes have been hand-labeled with instance-level object segmentation. You can explore a wide range of Matterport 3D reconstructions interactively at https://matterport.com/gallery.

Why is 3D data important?

3D is key to our perception of the world as humans. It enables us to visually separate objects easily, quickly model the structure of our environment, and navigate effortlessly through cluttered spaces.

For researchers building systems to understand the content of images, having this 3D training dataset provides a vast amount of ground truth labels for the size and shape of the contents of images. It also provides multiple aligned views of the same objects and rooms, allowing researchers to look at the robustness of algorithms across changes in viewpoint.

For researchers building systems that are designed to interpret data from 3D sensors (e.g. augmented reality goggles, robots, phones with stereo or depth sensing, or 3D cameras like ours), having a range of real 3D spaces to train and test on makes the process of development easier.

What’s possible with this dataset?

Many things! I’m going to share a few of the areas of research Matterport is doing.

We’ve used it internally to build a system that segments spaces captured by our users into rooms and classifies each room. It’s even capable of handling situations in which two types of room (e.g. a kitchen and a dining room) share a common enclosure without a door or divider. In the future, this will help our customers skip the task of having to label rooms in their floor plan views.

We’re also experimenting with using deep learning to fill in areas that are beyond the reach of our 3D sensor. This could enable our users to capture large open spaces such as warehouses, shopping malls, commercial real estate, and factories much more quickly, as well as enable new types of spaces to be captured. Here’s a preliminary example in which our algorithms use color and partial depth to predict the depth values and surface orientations (normal vectors) for the areas that are too distant to be picked up by the depth sensor.

We’re also using it to start fully segmenting the spaces captured by our customers into objects. Unlike the 3D models we have now, these fully segmented models would let you precisely identify the contents of the space. This enables you to do a wide range of things, including automatically generating a detailed list of the contents and characteristics of a space and automatically seeing what the space would look like with different furniture.

Ultimately, we want to do for the real world what Google did for the web — enable any space to be indexed, searched, sorted, and understood, enabling you to find exactly what you’re looking for. Want to find a place to live that has three large bedrooms, a sleek modern kitchen, a balcony with a view of a pond, a living room with a built-in fireplace, and floor-to-ceiling windows? No problem! Want to inventory all the furniture in your office, or compare your construction site’s plumbing and HVAC installations against the CAD model? Also easy!

The paper also shows off a range of other use cases, including improved feature matching via deep learning-based features, surface normal vector estimation from 2D images, and identification of architectural features and objects in voxel-based models.

Why is this preferable to a synthetic 3D dataset?

Synthetic datasets are an exciting area of research and development, though they have limitations in terms of how well systems trained purely on a synthetic dataset work on real data. The tremendous variety of scene appearances in the real world is very difficult to simulate, and we’ve found synthetic datasets to be most useful as a first round of training before training on real data as opposed to the main training step.

What’s next?

We’re excited to hear what you all end up doing with this data! As noted above, you can access the data, code, and 3DV conference paper here and we are excited to partner with research institutions on a range of projects.

If you’re passionate about 3D and interested in an even bigger dataset, Matterport internally has roughly 7500x as much 3D data as is in this dataset, and we are hiring for a range of deep learning, SLAM, computational geometry, and other related computer vision positions.

We’d like to thank Matthias Niessner, Thomas Funkhouser, Angela Dai, Yinda Zhang, Angel Chang, Manolis Savva, Maciej Halber, Shuran Song, and Andy Zeng for their work in labeling this dataset and developing algorithms to run on it. We’d also like to thank all the Matterport camera owners who gave us permission to include their 3D models in this dataset.

Matterport’s internal work in this area was made possible by Waleed Abdulla, Yinda Zhang, and Gunnar Hovden.

Enjoy the world made possible by these spaces! We certainly have!



Crafting a PWA: Part 1 — Setting up the development workflow

By Ankeet Maini

PWAs have been gaining popularity for quite some time now. Crafting a good, performant experience is a continuous journey.

So before even embarking on the PWA journey, we should invest time in setting up the development workflow.
This separates great apps from apps that were great once.

For example, let’s consider a GitHub repository which has a Hacker News front end implemented. This is not a PWA yet. It’s made with React and Redux.

I want every Pull Request on this repository from now on to be tested and audited for performance issues.

How do I do that? Using a CI server, like Travis. With Travis we’ll add support for lint checks, automatic stage deployments, and audits using Lighthouse, and we’ll be in absolute control over the changes happening to the app at all times.

Step 1 — Adding Travis CI

Enable Travis CI for your repository by going to your profile page

Once you enable the switch, you’d then need to add a .travis.yml which will instruct Travis on what to do at different stages of the build.
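
A minimal sketch of such a config, assuming the project exposes its lint task as an npm script called lint, could look like this:

language: node_js
node_js:
  - "8"
install:
  - npm install
script:
  - npm run lint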

Above is a simple .travis.yml which runs lint checks on every PR.

Step 2 — Adding stage deployments

  1. Being able to see the code changes in action helps the reviewer a lot to merge your changes with confidence.
  2. I’ll use Now to deploy your code to a unique stage URL once a Pull Request is created. You can use Surge as well.
  3. Our challenge is to integrate Now deployments from Travis so that anytime a PR is created/updated the new changes are deployed and ready to be seen/audited by other reviewers.
  4. now-travis is an excellent utility to deploy to now. The thing with now deployments is — Every time you deploy a project, now will provide you with a new, unique URL.
  5. But now-travis doesn’t provide the ability to save the URL so that we can use it later to run lighthouse audits. I’ll cover this bit in a little while.
  6. So I added a change to save the deployed URL to a temporary file and created a Pull Request which hasn’t been merged as of now. You can use this fork for your setup: https://github.com/ankeetmaini/now-travis
  7. Run npm i -D ankeetmaini/now-travis to add now-travis as a dev dependency to your project.
  8. Follow the instructions in the README to integrate now deployments with Travis.
  9. now uses npm start or npm run now-start to start your application. It gives preference to now-* commands, so in this case now-start would be executed instead of npm start. This is useful because in dev mode I can still use npm start, while in production I may need to pass NODE_ENV=production, or something else entirely.
  10. Update your .travis.yml to run now-travis after a successful build. See this line of code in after_script: `NOW_ALIAS=react-hn-ankeetmaini node_modules/.bin/now-travis --file=now_url`
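
That after_script entry sits in .travis.yml roughly like this (a sketch; the NOW_ALIAS value is specific to this example project):

after_script:
  - NOW_ALIAS=react-hn-ankeetmaini node_modules/.bin/now-travis --file=now_url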

With this setup, Travis will now deploy every pull request¹. You can see in the image below that a Staging deployment was done.

CI checks PR for lint, deploys and audits app performance

Step 3 — Integrating Lighthouse

  1. Lighthouse is an audit tool by Google which checks your app against a number of points and scores it.
  2. It’s vital to run this audit continuously, so as to always keep our app fast and performant. We’ll use lighthouse-ci to integrate it with Travis.
  3. Add lighthousebot as a collaborator in your repository, so that it can update the status of your PR and post a comment with the Lighthouse score.
  4. Request an API key and add it as an ENV variable in Travis. This isn’t necessary as of now, but will be in the future.
  5. Since now deploys our app to a unique URL, we save it in a file named now_url. We need to read this URL from the file and give it as an input to lighthouse-ci.
  6. To do this I created a file run-lighthouse.js at the root of the folder, with code along the lines of the sketch a little further below. lighthouse-ci takes a few options, and we pass them in from that file.

7. Lastly, add an entry into .travis.yml in the after_script section to run the above file after the stage deployment is done. --file is the argument which takes the name of the file from which the deployed URL needs to be read. This will now evaluate your stage deployment, fail² the PR if it doesn’t meet the minScore, and also post a comment with your Lighthouse score; see this PR https://github.com/ankeetmaini/react-hn/pull/9

./run-lighthouse.js --file=now_url
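
Here is a minimal sketch of what run-lighthouse.js could look like. It only shows the --file plumbing described above; the exact lighthouse-ci invocation at the end is an assumption, so check the lighthouse-ci README for the real flags (minScore, API key, and so on):

#!/usr/bin/env node
// Sketch only: read the stage URL written by now-travis and hand it to lighthouse-ci.
const fs = require('fs');
const { execSync } = require('child_process');

// Find a "--file=now_url" style argument (simple hand-rolled parsing, not a lighthouse-ci API).
const fileArg = process.argv.find((arg) => arg.startsWith('--file='));
if (!fileArg) {
  console.error('Usage: run-lighthouse.js --file=<file containing the deployed URL>');
  process.exit(1);
}

// now-travis wrote the deployment URL into this file during after_script.
const url = fs.readFileSync(fileArg.split('=')[1], 'utf8').trim();
console.log(`Running a Lighthouse audit against ${url}`);

// Assumption: lighthouse-ci exposes a CLI that takes the URL; consult its README for the exact options.
execSync(`node_modules/.bin/lighthouse-ci ${url}`, { stdio: 'inherit' });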

Congratulations! You’ve successfully set up an awesome workflow. All the code used in this article lives here.

[1] Since the free OSS plan in now can only have three active deployments at a time, you might need to manually remove deployments using now rm id.

[2] Right now lighthousebot will only post a comment and not fail your PR, because of insufficient rights on the repository. The screenshot above showing my PR failing is from a separate instance of lighthouse-ci that I ran for demo purposes. See this issue for more details.



Lordmancer II — a mobile MMORPG promising to let its players mine cryptocurrency while playing

By Anton Telitsyn

Players’ desire to somehow earn money while playing is nothing new.

Real money trading in games has always been illegal or at least strongly discouraged by most game developers, especially in MMORPGs. Coincidentally, it’s in MMORPGs where real money trading is most common, since those games always have virtual economies in which game “gold” and other resources have some real value. Real money trading can be a big deal, with the most expensive known purchase being $6 million in a game called Entropia Universe!

Lordmancer II is an MMORPG for mobile phones and tablets running Android and iOS. The game is basically ready to ship. It is currently in soft launch in Russia. There’s plenty of game content still to be added, but the framework of the game is there. Retention and payment metrics are already looking good.

Lordmancer II seeks to legalize the typically “illegal” market for game items by introducing a cryptocurrency token as a second “hard currency”. The game has two “hard currencies”, dubbed LordCoins (LC) and Crystals, and a soft currency, “gold”. The rarest and most precious weapons and items will be available for purchase only with LordCoins, which are by nature cryptocurrency tokens based on Ethereum.

Scheme of Token turnover

Lordmancer II encourages its players to “farm” rare and valuable weapons and artifacts in the game and sell them to other players, earning LCs in the process. They can subsequently sell those LCs for BTC or ETH on a crypto exchange. Players will also be encouraged to “cultivate” characters for sale.

To make the whole system less complicated for those not familiar with cryptocurrencies, Lordmancer II will offer ways of purchasing LordCoins directly for fiat money, inside and outside of the game. This way, rich players (“whales”) ready to spend a lot on game items will have a convenient way to purchase rare items, while those players willing to earn some money will have an opportunity to do it without violating game policy, in an open, fair, supported and transparent way.

With each trade made with LCs, the game will burn a fraction of the tokens involved. This will result in an ever-decreasing amount of LCs being available on the market, which in turn will push the price of the remaining LCs ever higher.
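
As a hypothetical illustration (the actual burn rate is not specified), if 1% of every LC changing hands were burned, a steady trade volume of 1,000,000 LC per month would permanently remove 10,000 LC from circulation each month, so the circulating supply can only shrink over time.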

Lordmancer II is already in open beta test in Russia and is going to launch globally in 2018.

Lordmancer II pre-ICO was sold out in 5 days in August.

The main ICO round starts on October 23.

More on the game and its ICO here: http://lordmancer2.io/

Follow Twitter, join Facebook, Reddit, or Telegram to discuss the project.



An Overview of Cryptocurrencies for the Savvy Investor.

By Alex Krüger

An Overview of Cryptocurrencies for the Savvy Investor

“Never invest in a business you cannot understand.” -Warren Buffett

Blockchain technology is revolutionary. Yet investors are throwing millions at cryptocurrencies offering terrible value propositions, and despite the recent market drop on the back of China’s ban on cryptocurrency exchanges, cryptocurrencies are still in bubble mode.

Prices may have risen too far too fast: the aggregated cryptocurrency market capitalization has gone from USD 18 billion to USD 135 billion between the start of 2017 and now, a 650% increase. Many valuations are outrageous; cryptocurrencies with no intrinsic value are currently worth hundreds of millions. Investors’ exuberance notwithstanding, the technology is groundbreaking, fundamentals are often excellent, and the hundreds of millions poured into strikingly poor investments should become a drop in the bucket once cryptocurrencies undergo mass adoption.

Therein lies a seeming contradiction. Investing in businesses with solid fundamentals would typically represent a good investment rather than a poor one. If business fundamentals are great, how can there be a problem? The explanation is simple. Investments in blockchain projects are not going through traditional channels (i.e. stocks, bonds, etc.) but rather through a new channel: cryptocurrencies themselves; and cryptocurrencies often represent seriously flawed investment vehicles.

This article is meant as an in-depth overview of cryptocurrencies for the savvy investor. It covers the following:

  • why bitcoin and ethereum have intrinsic value,
  • a technical overview of cryptocurrencies and digital tokens,
  • Initial Coin Offerings (ICOs),
  • why there is a bubble, and
  • why, bubble notwithstanding, prospects are bright.

Bitcoin

Critics misunderstand bitcoin. Bitcoin behaves like a financial asset. Bitcoin is used like a financial asset. But bitcoin actually represents property, not a financial asset. In other words, bitcoin is property that trades like a financial asset. Why do I say bitcoin represents property? Consider the following definition: “A financial asset is a non-physical asset whose value is derived from a contractual claim” (i.e. a financial asset represents a liability for someone else). Bitcoin is a non-physical asset, yet it does not represent any contractual claim. It is an asset that is not a liability of any entity or person.

From this vantage point, bitcoin is similar to gold. Gold also represents property that trades like a financial asset. Gold derives value from people’s perception of gold as an alternative to fiat currencies and a long-term store of value. And gold undoubtedly has intrinsic value. It is a metal that conducts electricity, does not tarnish, and has numerous real life uses. About half of gold’s demand comes from jewelry and technology (gold is used inside electronics). Similarly, bitcoin’s value comes from people’s perception of bitcoin as an alternative to fiat currencies and a store of purchasing power. But bitcoin can also be considered to have intrinsic value. Think of bitcoin as an Unhackable Piece of Electronic Art, that can only be transferred by holding cryptographic keys. Creating a bitcoin requires advanced coding and massive computing power. Bitcoin is mathematical art that cannot be copied.

Even if one day bitcoin is deemed a dismal technology, given its fixed maximum supply, bitcoin would likely retain value because it was the first of its kind. So best case scenario, bitcoin remains at the forefront of cryptocurrencies; worst case scenario, it becomes a prized relic. Unless of course some day someone figures out how to hack the Bitcoin blockchain, in which case bitcoin would go down to zero very quickly.

Ethereum

Ethereum is the second most popular cryptocurrency and the king of Blockchain-As-A-Service. It is a programmable blockchain with a Turing-complete scripting language. Like Bitcoin, it provides a decentralized peer to peer electronic cash system. Unlike Bitcoin, Ethereum allows for the creation of smart contracts (i.e. programming code that auto-executes once certain conditions are fulfilled). And unlike Bitcoin, with Ethereum developers can build and deploy decentralized applications (e.g. an Ethereum-based decentralized Facebook).

Smart contracts and decentralized applications enable Ethereum to uproot everything from basic user applications to how business is conducted. Consider a decentralized Facebook built on Ethereum where users control their own data. Consider decentralized self-executing insurance contracts or financial derivatives. Consider decentralized incorruptible voting platforms. Consider decentralized prediction platforms. The list of possibilities is endless. In short, one may think of Ethereum as a decentralized virtual machine or supercomputer that could redefine the world as we know it.

Ether is the cryptocurrency of the Ethereum blockchain. It is both a cryptocurrency and the means of payment for accessing the Ethereum network. Ethereum users pay with ether for the computing power they are using. Think then of ether as the fuel for powering the Ethereum network. This is the source of ether’s intrinsic value. Ether’s current market cap is USD 27 billion. It should be noted that even though Ethereum is the blockchain and ether is the currency, most people refer to ether as ethereum.

A Technical Overview of Cryptocurrencies

— Cryptocurrencies, virtual currencies, electronic coins, digital coins, digital tokens and blockchain tokens are different names for the same thing.

— A cryptocurrency is a chain of digital signatures stored on a decentralized public ledger known as a blockchain (for an in-depth explanation, refer to the original Bitcoin whitepaper by Satoshi Nakamoto).

— Having a cryptocurrency means having a private key (similar to a password) giving the holder the ability to transfer the cryptocurrency to someone else. Private keys are stored in digital wallets.

— Cryptocurrencies are transferred from one owner to another by adding a transaction to the blockchain (great explanation here).

— Blockchains are kept secure from hacking through the work of validators, who validate transactions (great explanation here).

— Validators are given cryptocurrencies as reward/payment every time they validate a transaction (i.e. cryptocurrencies provide the economic incentive for people to become validators). Validators may also be awarded transaction fees paid by the sender.

— There are multiple consensus mechanisms for validating transactions. The main ones are:

  • Proof-of-Work (PoW): validators validate transactions by running an algorithm to solve a cryptographic puzzle. This is known as mining. Mining creates new coins. Validators are rewarded with new coins and transaction fees (if any).
  • Proof-of-Stake (PoS): validators validate transactions by staking (“depositing”) cryptocurrencies. No new coins are (usually) created. Validators are rewarded with transaction fees only.

— Cryptocurrencies can be created by mining (e.g. bitcoin) or by simply allocating coins to an address (e.g. ripple). The latter is known as pre-mining. It is convention to refer to non-mined coins as pre-mined, even though doing so is technically incorrect if the coin is not mine-able, such as ripple. The term pre-mined comes from the practice by blockchain developers of creating mine-able coins for themselves before releasing the blockchain’s source code to the public, allowing the public to mine.

— Cryptocurrencies can be defined as Native Tokens, which are intrinsic to a blockchain and used for validations (e.g. bitcoin), and Non-Native Tokens, which are created on top of a programmable blockchain such as Ethereum, and used for multiple purposes (more on that later).

  • Creating a token on Ethereum is as easy as writing 25 lines of code. This has made Ethereum the most widely used protocol for non-native token creation. Non-native tokens can be either mined or pre-mined, although they generally are fully pre-mined.
  • The name digital token is mostly used in reference to cryptocurrencies built on the Ethereum platform (i.e. Ethereum tokens), even though technically all cryptocurrencies are digital tokens.

— Cryptocurrencies can also be classified as Protocol Tokens or App Tokens.

  • Protocols are sets of rules, while applications are computer programs built on top of protocols.
  • There is one native protocol per blockchain. Non-native protocols can be built on top of programmable blockchains such as Ethereum.
  • Protocol tokens are required by a protocol to function. Protocol tokens can be both native and non-native. All native tokens are protocol tokens.
  • App tokens are not required by an application or protocol. Instead, App tokens are generally used by the application users to access the application’s services.

Initial Coin Offerings

The public can acquire tokens either through mining, by purchasing in secondary markets (i.e. through peer-to-peer transactions or in exchanges), or by participating in an Initial Coin Offering (i.e. purchasing directly from token creators). Initial Coin Offerings (ICOs) are similar to Initial Public Offerings (IPOs) where investors are buying cryptocurrencies instead of shares. There are some notable differences between the two:

  • Shares give shareholders equity in a company, while cryptocurrencies do not give coin holders any equity.
  • Shares are regulated as securities, while coins are not (although this is changing, see for example recent US developments here).
  • Cryptocurrencies are usually paid for with other cryptocurrencies, which facilitates participation of international users.

One can think of ICOs as democratized venture capital, or venture capital meets crowdfunding. ICOs give blockchain enthusiasts direct and easy access to investing in blockchain start-ups. ICOs enable blockchain start-ups to raise early stage capital bypassing venture capital firms, without even diluting equity ownership. And ICOs can also be great for venture capital firms willing to give up the equity ownership associated with traditional financing in exchange for a highly liquid investment (typical venture capital investments are illiquid and may take many years for investors to cash out).

The major downside of ICOs is the lack of regulatory oversight, which allows those raising funds to offer minimal disclosures for investors, “exaggerate benefits, fail to identify risks, and create unsubstantiated hype”. Fund raisers may even be anonymous, such as is the case with the extremely popular Bitconnect (BCC, market cap USD 910 million — note by definition market cap is not the number listed by coinmarketcap.com, computed using Circulating Supply, but rather the often considerably larger number resulting from multiplying Price by Total Supply).

For further reading about ICOs I recommend this article analyzing ICOs pros & cons, as well as this article covering the lack of ICO disclosure regulations.

ICOs & Non-Native Tokens

While Ethereum has made it easy for developers to create digital tokens, ICOs have made it easy for investors to access those digital tokens. The lax regulatory framework coupled with the ease of matching entrepreneurs with eager investors has resulted in a massive ICO boom. It is in the ICOs of non-native tokens that investors’ irrational exuberance becomes apparent.

Uninformed or informed, unsophisticated or sophisticated …. investors of all kinds are participating in ICOs and throwing hundreds of millions at often worthless tokens that offer the investor little beyond possible gains from selling tokens later at a higher price.

Picture Pether Block (pun intended), a sharp entrepreneur seeking to raise funds. Imagine Pether raises funds not by issuing equity (stocks) or legal promises to pay funds back (loans, bonds), but instead by giving out pretty bits of paper with no legal backing saying he plans to pay back. Now imagine Pether actually gets funding by giving out pretty bits of paper that do not even promise to pay back. Furthermore, imagine a case where Pether is actually anonymous, he did not even have to disclose his identity to raise funds. This is happening in some ICOs. Ponzi schemes abound. OneCoin is the most famous uncovered Ponzi scheme. Bitconnect, a cryptocurrency that offers guaranteed 149% annualized returns (assuming daily reinvestment) plus variable returns generated by a “volatility trading bot”, is in my humble opinion the most striking Ponzi scheme of present times.

Think about it…

  • Buy a share, and get legal ownership of a company.
  • Buy a bond, and obtain the right to receive interest payments.
  • Buy bitcoin, and receive a liquid asset that derives its value from the computing power dedicated to creating such piece of mathematical art.
  • Buy ether, and receive a liquid asset that derives its value from both the computing power dedicated to creating it, as well as its value as means of payment for using the Ethereum supercomputer.
  • Buy any native token, and receive a cryptocurrency providing economic incentives for a blockchain to function.
  • Buy a non-native token … and what do you receive?

There are eight categories of Non-Native Tokens:

  1. Protocol tokens. (e.g. Augur: REP, market cap USD 200 million).
  2. Tokens issued for accessing the platform/services of the issuing company; future services, to be precise, as in most cases tokens are issued when the platform is no more than an idea. Think of them as utility tokens or Gift Cards. (e.g. Factom: FCT, market cap USD 160 million).
  3. Asset-backed tokens, where the blockchain asset represents a claim on an underlying asset, and to claim the underlying one sends the blockchain asset (i.e. the token) to the issuer. (e.g. Tether’s USD: USDT).
  4. Tokens issued under the promise of participation in future revenues, even though there typically is no legal obligation for companies to honor such promises. Participation percentages and timing are almost always left undefined. (e.g. DigixDAO: DGD, market cap USD 150 million).
  5. Tokens said to represent equity in the issuing company, giving token holders votes as shareholders, participation in future dividends, and supposedly ownership of the company as well. (e.g. Lykke: LKK, market cap USD 410 million).
  6. Tokens issued under the promise of appreciation backed by promises from the company to repurchase and destroy tokens once sustainable revenue materializes. (e.g. Populous: PPT, market cap USD 150 million).
  7. Tokens issued with no value proposition whatsoever. Think of them as toy casino tokens. (e.g. Steemit: STEEM, market cap USD 290 million).
  8. Potential scams (e.g. Veritaseum: VERI, market cap USD 8.9 billion — note only 2% of coins are in circulation).

Protocol tokens (#1) and gift card tokens (#2) are certainly valuable. If the associated blockchain or service becomes popular, their value will rise accordingly. They represent a bet on the success of the underlying technology.

Asset-backed tokens (#3) are useful (e.g. it is easier to transfer ownership of 1000 ounces of gold in digital format than in physical format). Their downside is the credit risk of the issuing company (what if they go bust or they run away with the money?).

Tokens that offer revenue participation (#4) could be very valuable. Ideally participation conditions (percentages, timing) would be defined prior to the ICO, and the distribution of profits would happen autonomously following instructions hard-coded in a smart contract. Some issuers get creative and define these tokens as “Economic Shares” or “Non-Ownership Shares”, in an effort to convey that tokens are shares, which is not the case.

Equity tokens (#5) are similar to participation tokens with the explicit mention of “dividends” and/or voting rights. Equity tokens have been mostly avoided by issuers to reduce the probability of regulators classifying tokens as regulated securities. Marketing of equity tokens is generally misleading, because simply calling a token a share does not make the token a share. A token that isn’t backed by equity documentation cannot be equity. Equity placements require documentation filings with a regulator and the publication of a prospectus for investors. Furthermore, even if equity documentation were there, it is not clear equity tokens could legally represent shares (laws are country dependent and subject to change).

The latter three types of non-native tokens have little if any intrinsic value. Yet investors gobble them up, often failing to differentiate between a great project and great value. A project may represent a fantastic idea, while the associated investment vehicle may nonetheless offer terrible value for investors. And we are not only talking about great ideas here; one may get any idea (even a terrible one such as the Fuck Coin), bundle it with a no-strings-attached token, and investors’ money will likely follow. Somebody even launched an ICO for the Useless Ethereum Token and raised $40,000.

The Cryptocurrency Bubble

Despite the recent market correction following China’s ban on cryptocurrency exchanges, bitcoin is still up 305% this year. Ether is up 3400% this year. Does this represent a bubble? Not necessarily. ICO volume is up 675% this year: all-time cumulative ICO funding is $2.3 billion, up from $295 million on January 1st. If you start the count on May 1st, then bitcoin is up 190%, ether is up 240%, and ICO funding is up 420%.

Why do ICOs matter that much? After all, bitcoin and ether represent two-thirds of the combined cryptocurrency market cap. So why do ICOs matter? Because ICOs are mostly paid for in bitcoin and ether, and also a great number of tokens are Ethereum tokens. ICOs are driving prices!

Given that it is real projects driving prices, one could then argue there is no bubble. Right? Now remember how investors are throwing hundreds of millions into ICOs in which the tokens are gift cards at best and goodwill promises at worst. Many of these projects would not receive a penny from investors without shiny coins involved. Companies are getting funding with tokens that represent no liability, yet investors convince themselves those tokens give them the right to participate in the growth of the business. Investors are behaving irrationally. Investing in an asset whose value depends on the goodwill of the company’s management represents bubbly behavior, allowing one to conclude that there indeed is a bubble.

Publicly traded bitcoin investment vehicles are visible proof of the bubble. In the absence of bitcoin exchange traded funds (ETFs) in which to invest (the SEC is yet to approve any), US asset managers and investors seeking exposure to bitcoin without having to buy bitcoins have flocked to the Bitcoin Investment Trust (GBTC), a publicly quoted security that is supposed to track the performance of bitcoin as it’s fully backed by bitcoins. Demand for GBTC is so high that it currently trades at a 90% premium over its net asset value (i.e. GBTC does not track bitcoin well at all).

Another great example of the bubble is First Bitcoin Capital, a publicly traded Canadian company (BITCF) that claims to be a vertically integrated Bitcoin entity. This is a company that pays dividends with coins it makes up itself (i.e. TeslaCoilCoin). Check this Bloomberg article for a fun read on the subject. It’s a lot of fun, as long as you are not one of those who bought BITCF in August just because it was one of the few publicly traded ways to invest in bitcoin. On August 24 the SEC suspended trading on BITCF for 10 days “because of concerns regarding the accuracy and adequacy of publicly available information about the company”. On September 8 BITCF resumed trading, opening 69% lower.

Still need convincing? The latest Oaktree Capital memo by Howard Marks features a list of nine necessary conditions for a bubble to occur — it states “a few [of these conditions] will give us a bull market; all of them will deliver a boom or bubble”. It’s a noteworthy read. It’s also noteworthy that all conditions are undoubtedly present in the cryptocurrency market.

Market Outlook

Will the bubble continue? I believe it will. China ban notwithstanding, the ICO gold rush is nowhere near its end. Most institutional investors have yet to participate in the asset class. There are large sums of money from institutions and high net worth individuals about to enter the market through newly minted hedge funds. Most retail investors don’t know how to get their hands on bitcoins and ether. Relatively few people understand how Bitcoin works, let alone Ethereum. And polls indicate extremely few women are participating in the bitcoin rush (coin.dance publishes a weekly poll called “Bitcoin Community Engagement by Gender”, with the percent of male participants consistently above 95%). Bitcoin was created eight years ago, yet the blockchain industry is still in its infancy and mass adoption is yet to happen. Prospects are bright.

The main market risk is the potential for government intervention. China just banned all cryptocurrency exchanges, and nothing stops other countries from following that route. Governments are not exactly ecstatic about cryptocurrencies’ ability to evade capital controls, nor about their use by tax evaders and money launderers. Furthermore, you can rest assured that even the most pro-free-market Western governments would be quick to ban or heavily regulate cryptocurrencies if they were to grow large enough to have an impact on central banks’ ability to dictate monetary policy.

That being said, one may then ask: do current levels represent a good price to buy? Consider the following when answering that question:

  1. On September 4 China banned ICOs, and on September 15 Chinese regulators announced cryptocurrency exchanges must stop trading by September 30. This has caused a significant price drop. Bitcoin fell from $4400 on Sep/4 to $2970 on Sep/15, before bouncing up around 30% on record volume on that same day. The ban is shutting out a fifth of current worldwide demand (for example BTC/CNY volume stands at 18% of worldwide volumes, per bravenewcoin.com). This will diminish capital flows into cryptocurrencies, but will not affect long-term fundamentals.
  2. Ether has a history of flash crashes: on Jul/18/2017 ether dropped and bounced back a full 20% in just 3 seconds (it happened on the now defunct BTC-e exchange), and on Jun/21/2017 ether dropped from $319 to $0.10 in seconds, to almost fully recover in less than two minutes (it happened on the GDAX exchange; note GDAX did not cancel trades, those who profited from buying the crash kept their profits, yet GDAX compensated out of pocket those who lost money during the crash). An investor could use limit orders to take advantage of flash crashes.
  3. The key determinant of prices is capital flows. The information for most upcoming ICOs is publicly available, and future ICO volumes can be estimated. Furthermore, institutional money is on its way. The day the SEC approves a cryptocurrency ETF, funds will pour in. Consider that current total cryptocurrencies market cap represents just 0.17% of assets managed by the top 400 institutional asset managers.
  4. Think of market penetration. Some estimates indicate there are three million cryptocurrency users, which represents 0.14% of the 2.1 billion people in the world between 14 and 65 who have internet access. Can you imagine market penetration increasing to 5% within five years? That would mean 105 million users. What would happen with price then? Jeremy Liew, Snapchat’s first investor, thinks bitcoin could hit 400 million users by 2030, taking its price to $500,000. Would that be outrageous?
  5. Bitcoin is expected to hard fork again by mid-November, bringing significant uncertainty regarding governance of the Bitcoin protocol and even greater 2-way price volatility. By itself, this is good reason to be bearish in the short term.

Patience, decisiveness and skepticism are crucial tools in the toolkit of the savvy investor. Is it time to buy? You decide.

Before you go…

If you enjoyed this, please consider showing your support by clicking on the Clapping Hands button, as well as sharing the article: Facebook | Twitter | Reddit | LinkedIn | Email. And don’t hesitate to follow me on Twitter to stay connected.

______________________________________________________________________

Disclaimer: this article represents the author’s opinions. It is for informational purposes and should not be regarded as investment advice.



Search Engine Optimization en vogue

By Andrew Lucker

All SEO claims should come with a warranty

There is a lot of contradictory advice on SEO. For example, something as simple as whether HTTP or HTTPS is better is up for debate. Google itself has announced that it will reward sites for using HTTPS/SSL. Due to Google’s dominance in the search engine space, it may be safe to assume that HTTPS will improve your rank. However, a conflicting ranking variable is site loading speed: HTTPS can prevent intermediate caching and requires a lengthier handshake to complete a transaction. This is the reason Google developed the SPDY protocol. So now we have three options, and it is not clear what mix of speed and safety will earn the greatest respect from Google’s algorithm. And that is just one search engine; other portals or engines may analyze your site very differently. Personally, I have not found any statistically significant difference between HTTP and HTTPS with regard to search engine ranking.

For more practical advice, it is always best to consider what produces the best customer experience. It should be assumed that this is the goal of both the portal and the site. So let’s break down a few features that improve site usability: speed, portal navigation, site navigation, content quality, and traffic quality.

Speed is easy enough to measure. Google provides tools to analyze your site speed and find ways to improve. Good advice is simply engineering your site well and using a CDN.

Portal navigation depends on the context your site will be appearing in. The two most common contexts would be Google and Facebook. For both, it is good advice to have a readable title and page meta description. For Facebook, a good image is also helpful. Mobile consideration is also important, and again Google has a tool to help with that.

Site navigation is up to your designer. For crawler accessibility, it is also important to structure all your public links so that the spider won’t wrap itself into a loop. Also, unique pages should not be hidden behind URL parameters.

Content Quality is probably the most important of all considerations, but also has no singular solution. Track your users and make sure that your site is engaging and users are staying on the site for longer and returning. Search engines have lots of information regarding entry/exit, so good traffic patterns are rewarded.

Traffic Quality is simple enough. Don’t spam and you won’t get spammed. Don’t buy traffic and avoid fraudulent clicks. Moz has a tool to help see how reliable your upstream links are.

I purposely left out Page Rank, because it has been reported as less and less of a rank predictor. Google is moving to new algorithms that look more at consumer experience and less at in-bred linking schemes.

That is all I can think of for now. If anyone has tips feel free to leave them in the comments below.



12 Lessons Learned from 12 Rejections Submitting Actions on Google

Par Thomas Wiradikusuma

During one of the Google I/O 2017 talks, Google announced a bot-building competition, Actions on Google Developer Challenge. Actions on Google are apps for Google Assistant, similar to what Alexa Skills are for Amazon Alexa.

At the time, I was building Sarah Shopper, a price tracker bot for Amazon, IKEA, and some other e-commerce sites. It was for Messenger, but since the typical interaction with the bot is fire-and-forget, making it an action for Google Assistant seemed like a good idea. I could kill two birds with one bot, why not?

I’ve had experience building bots for Messenger and LINE, so I thought it would be a straightforward “build-submit-profit” with slight platform-specific differences. Boy, was I wrong. After having my submissions rejected a total of 12 times over the course of a few weeks, these are my lessons learned:

1. Release early, but not until it’s ready

The reason is, the review process can take literally a week. In comparison, it takes 2–3 days for Messenger and zero days for LINE.

If you have set a certain launch date (or are joining a bot competition), you’d better release early. Make sure your bot complies with all Actions’ policies, because if it gets rejected (and it will), you’ll need to wait again.

“I don’t mind waiting for a week,” you said. Well…

2. Rejection doesn’t tell you the whole picture

If your app is rejected, the rejection email won’t list everything that’s wrong with your app. It will mention some issues, which you will fix, then you will resubmit, and then you will get another rejection. Rinse and repeat.

The reviewers probably “stop on the first error”, so the best way to avoid this is to comply with the policies to the letter. You don’t want to wait another week, do you?

So you’ve made sure everything is followed, and you start porting your Messenger bot, but then you realize that…

3. Bots can only reply

Bots on LINE can push messages, and so can bots on Messenger (with restrictions). On Assistant, there is no way to “return a call”—you can only reply to the user’s conversation. The reply must come within a few seconds, and you can only send two chat bubbles max.

It’s kind of pointless to ask Sarah Shopper to watch certain websites if the users still need to call her from time to time to check. The only way to give a notification is through email, but…

4. Bots are not allowed to ask users their email

If you do, your submission will be rejected. To get the user’s email, use Google Login through Account Linking. Ideally, you would defer login until you actually need it (in this case, when you need the email), but…

5. At the moment, you can’t login mid-conversation

Either you require the user to log in up-front (even before saying a word to your bot) or you wait until the feature is implemented. I don’t have the patience to wait, so I simply require my users to log in up-front.

If only it were that easy…

6. You need to have your own OAuth2 provider

Actions on Google uses OAuth2, and so does Google Login. Let’s connect the two! The login flow would be:

  1. User summons your bot (“Talk to Sarah Shopper”)
  2. Google Assistant presents login button
  3. User logs in, which means clicking the button and choosing which Google Account to use
  4. Your bot receives the user’s email

Well, not quite. You need to have your own OAuth2 provider, where you can then tell your users to log in with Google.

So they click login, and click another login, and… done? Well…

7. After you’re logged in, you need to call the bot again

You heard it right. I’m probably doing it wrong, but the documentation is not exactly clear.

There’s another login mechanism called Seamless/Streamlined/Quick Account Linking (it has a different name depending on which part of the documentation you read), but I haven’t managed to make it work.

Alright. Account linked, email retrieved. Done? No, my app still got rejected a few more times. Apparently…

8. Bots must never “leave the mic open”

All of your bot’s final replies must either end the session (your bot stops and the user returns to Assistant) or ask what the user wants next. You should also provide suggestion chips (pre-canned replies that the user can tap to reply). It’s like chatting with a clingy friend.

So you double-checked all conversation branches, and there’s no more open mic. Done? Well, for Sarah Shopper, you need to give her links to products you want her to watch.

9. There’s a limit on how many characters Google Assistant can accept

Amazon links can be very long (around 300 chars). A reviewer gave Sarah Shopper such a link, and the chat session was abruptly terminated with a vague message: “Sarah Shopper isn’t responding right now. Try again soon.” Obviously there was nothing I could do here, so I appealed and resubmitted.

10. Don’t get too creative with names

In between submissions, I created another bot that uses SSML. To save money, I used one of my parked domains, syurprise.com. Yup, the word “surprise” with a “y”. Don’t ask why, it was 17 years ago(!).

Apparently, one-word names are not allowed. I could have appealed, but I’d lost my patience and just wanted it published ASAP. I went with Syurprise Trivia instead.

I set Syurprise to be pronounced “surprise” just like Flickr is pronounced “flicker” (instead of “flick-ar”), but it was rejected because “Your name and pronunciation are too different. Your pronunciation should only differ in basic ways such as spaces, punctuation, and phonetic spelling”. I appealed. For fun, there’s an easter egg in my bot when you ask it how to pronounce its name.

11. Your bot can be published before you know it

Both Sarah Shopper and Syurprise Trivia are now published. Are they? The admin console says they’re still Publishing.

My wife can summon them from her Pixel phone. Maybe they just want to make me happy after weeks of waiting. Can you summon them?

12. Google does have free, human, support!

This last point is actually praise. When I received the first rejection email, I almost felt hopeless (“I’m going to be talking to a wall”), but they do have human support, and they’re quite responsive and knowledgeable.

Thanks for reading! I’d really appreciate it if you recommend this post (by clicking the 👏 button) so other people can find it.

P.S. If you’re building a chatbot, please consider joining my mailing list, probotdev.com, where I share actionable insights on bot-building for various platforms and industries.



Google May be Guilty of Gender Discrimination But Is Definitely Guilty of Bad Data Analysis

Par Jodi Beggs

In case you haven’t heard, Google is the target of a class-action lawsuit regarding gender discrimination. (Shocking, I know, given what we know about Silicon Valley more generally.) Part of the impetus for the lawsuit is an employee-led effort to collect compensation data that shows that men are paid more than women at the company. Interestingly, however, this data in itself doesn’t say a whole lot about whether discrimination exists. (Don’t stop reading, I’m not on team Google here, just on team “using math properly.”)

From a data perspective, proving discrimination can be somewhat difficult. For example, we hear the often-quoted “women make 77 cents for every dollar a man makes” statistic, but this number doesn’t really tell us much, since it could very well be the case that women sort into lower-paying occupations and jobs of their own volition, choose to work fewer hours, and so on. (On the other hand, we can’t rule out the discrimination hypothesis either using this number, and the reality is somewhere in the middle.) Ideally, what one would do to look for discrimination would be to compare otherwise equivalent men and women and see whether compensation differences still exist within the matched groups. Mathematically, this is essentially what economists do when they run a regression with “control variables”- variables that suck up the pay differences that are accounted for by stuff other than gender in order to estimate an “all else being equal” type of effect- in this case, the effect of being female.
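
To make the “all else being equal” idea concrete, a stylized version of such a regression (my illustration, not Google’s actual model) is:

log(wage_i) = β0 + β1·female_i + β2·location_i + β3·tenure_i + β4·level_i + β5·performance_i + ε_i

Here β1 is read as the pay gap that remains once the controls have absorbed everything else; the catch, as argued below, is that if controls like level or performance rating are themselves shaped by discrimination, β1 will understate the true gap.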

Google employees seem to be up on their applied math, since they put together an analysis so that they could make the following statement:

Based upon its own analysis from January, Google said female employees make 99.7 cents for every dollar a man makes, accounting for factors like location, tenure, job role, level and performance.

On the surface, this seems to suggest that significant gender discrimination just doesn’t show up in the data. BUT…and this is important…this example highlights the difference between doing math and doing data analysis (or, more charitably, data science)- while this conclusion may be mathematically correct, it’s basically a “garbage in, garbage out” use of econometric tools. Simply put, if you’re trying to isolate gender discrimination, you can’t just blindly control for things that themselves are likely the result of gender discrimination! It’d be like looking at the impact of diet on health and using weight as a control variable- sure, you’d get an “all else being equal” sort of result, but it wouldn’t make sense since weight is likely a step in the chain between diet and health outcomes. (In other words, the analysis would estimate the effect of a particular diet compared to a person who didn’t follow the diet but ended up weighing the same as the person who did, which is probably not the comparison that you want to make.)

If you don’t believe me, perhaps a labor economist and an econometrics text will convince you:

Dear Google, Occupation controls are literally the textbook example of how not to measure wage discrimination. Sincerely, Labor Economists

 — @SallyLHudson

In this way, Google tipped its hand quite a bit regarding the particular nature of gender discrimination at the company- if men and women are paid the same once job title and performance reviews are taken into account, then gender discrimination (if it exists) is taking place either by herding women into jobs with different roles/levels or showing anti-female (or pro-male) bias in performance reviews. (Also, if the “levels” have set pay bands, which the source article kind of suggests, controlling for level largely amounts to assuming the conclusion.)

Turns out my suspicions are pretty on point, given the specific claim of the lawsuit. It’s amazing what you can learn from data IF you look at it properly. In a semi-previous life, I worked as an economic consultant, which basically means that I helped prepare expert testimony to be used in lawsuits involving economic matters. What I wouldn’t give to be the expert witness who gets to offer up a rebuttal to Google’s crap econometrics here.



Startup Seeks “Rockstar Coder,” Accidentally Hires Guitarist

By The Byte

When local startup Dis/rupt.ly put out a job posting for a “rockstar” full stack developer, they weren’t expecting to get an actual rockstar.

“My buddy Kate, who’s the best engineer I know, told me that her friend Sam [Smith] was the perfect fit for the job, so I hired him immediately,” said founder Ryan White. “I figured she meant Sam was a top-notch coder, but turns out she meant he’s literally in a rock band.”

When asked about his new job, Smith seemed very enthusiastic. “I’ve had to work a lot to catch up to the other developers, but I can’t complain — the pay and benefits are great,” he said. “I’m especially pumped about the unlimited vacation. I’m going on tour with my band for a couple months soon, so I’ll definitely be taking advantage of that.”

At press time, White was seen deleting “Wanted: Ninja Programmers” from another Dis/rupt.ly job posting.



Understanding the IPFS White Paper part 2

By Mark Pors

This article is part 5 of the Blockchain train journal, start reading here: Catching the Blockchain Train.

The IPFS White Paper: IPFS Design

The IPFS stack is visualized as follows:

or with more detail:

I borrowed both images from presentations by Juan Benet (the BDFL of IPFS).

The IPFS design in the white paper goes more or less through these layers, bottom-up:

The IPFS Protocol is divided into a stack of sub-protocols responsible for different functionality:
1. Identities — manage node identity generation and verification.
2. Network — manages connections to other peers, uses various underlying network protocols. Configurable.
3. Routing — maintains information to locate specific peers and objects. Responds to both local and remote queries. Defaults to a DHT, but is swappable.
4. Exchange — a novel block exchange protocol (BitSwap) that governs efficient block distribution. Modelled as a market, weakly incentivizes data replication. Trade Strategies swappable.
5. Objects — a Merkle DAG of content-addressed immutable objects with links. Used to represent arbitrary data structures, e.g. file hierarchies and communication systems.
6. Files — versioned file system hierarchy inspired by Git.
7. Naming — A self-certifying mutable name system.

Here’s my alternative naming of these sub-protocols:

  1. Identities: name those nodes
  2. Network: talk to other clients
  3. Routing: announce and find stuff
  4. Exchange: give and take
  5. Objects: organize the data
  6. Files: uh?
  7. Naming: adding mutability

Let’s go through them and see if we can increase our understanding of IPFS a bit!

Identities: name those nodes

IPFS is a P2P network of clients; there is no central server. These clients are the nodes of the network and need a way to be identified by the other nodes. If you just number the nodes 1, 2, 3, … anyone can add a node with an existing ID and claim to be that node. To prevent that, some cryptography is needed. IPFS does it like this:

  • generate a PKI key pair (public + private key)
  • hash the public key
  • the resulting hash is the NodeId

All of this is done during the init phase of a node: ipfs init stores the resulting keys in ~/.ipfs/config and returns the NodeId.

When two nodes start communicating the following happens:

  • exchange public keys
  • check if: hash(other.PublicKey) == other.NodeId
  • if so, we have identified the other node and can, for example, request data objects
  • if not, we disconnect from the “fake” node

The actual hashing algorithm is not specified in the white paper; read the note about that here:

Rather than locking the system to a particular set of function choices, IPFS favors self-describing values. Hash digest values are stored in multihash format, which includes a short header specifying the hash function used, and the digest length in bytes.
Example:
<function code><digest length><digest bytes>
This allows the system to (a) choose the best function for the use case (e.g. stronger security vs faster performance), and (b) evolve as function choices change. Self-describing values allow using different parameter choices compatibly.

These multihashes are part of a whole family of self-describing formats, and it is brilliant; check it out: multiformats.
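To make this concrete, here is a rough Node.js sketch of the idea (my own illustration, not the actual libp2p code), assuming SHA-256 as the hash function, i.e. multihash function code 0x12 and a digest length of 0x20:

// Rough sketch of NodeId derivation and peer verification (not the real libp2p code).
// Assumes SHA-256, i.e. multihash <function code 0x12><digest length 0x20><digest bytes>.
const crypto = require('crypto');

function multihashSha256(buffer) {
  const digest = crypto.createHash('sha256').update(buffer).digest();
  // <function code><digest length><digest bytes>
  return Buffer.concat([Buffer.from([0x12, digest.length]), digest]);
}

function nodeIdFromPublicKey(publicKeyBytes) {
  // conceptually, the NodeId is the multihash of the public key
  return multihashSha256(publicKeyBytes).toString('hex');
}

function verifyPeer(claimedNodeId, presentedPublicKeyBytes) {
  // check if: hash(other.PublicKey) == other.NodeId
  return nodeIdFromPublicKey(presentedPublicKeyBytes) === claimedNodeId;
}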

Network: talk to other clients

The summary is this: IPFS works on top of any network (see the image above).

Interesting here is the network addressing to connect to a peer. IPFS uses multiaddr formatting for that. You can see it in action when starting a node:

Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip4/172.17.0.1/tcp/4001
Swarm listening on /ip4/185.24.123.123/tcp/4001
Swarm listening on /ip6/2a02:1234:9:0:21a:4aff:fed4:da32/tcp/4001
Swarm listening on /ip6/::1/tcp/4001
API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (read-only) server listening on /ip4/0.0.0.0/tcp/8080

Routing: announce and find stuff

The routing layer is based on a DHT, as discussed in the previous episode, and its purpose is to:

  • announce that this node has some data (a block as discussed in the next chapter), or
  • find which nodes have some specific data (by referring to the multihash of a block), and
  • if the data is small enough (<= 1 KB) the DHT stores the data as its value.

The command line interface and API don’t expose the complete routing interface as specified in the white paper. What does work:

# tell the DHT we have this specific content:
$ ipfs dht provide QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG
# ask for peers who have the content:
$ ipfs dht findprovs QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG
QmYebHWdWStasXWZQiXuFacckKC33HTbicXPkdSi5Yfpz6
QmczCvhy6unZEVC5ukR3BC3BPxYie4jBeRApTUKq97ZnEo
QmPM3WzZ3q1SR3pzXuGPHD7e6z3eWEWCnAvrNw7Wegyc8o
QmPKSqtkDmYsSxQbqNkZG1AmVnZSFyv5WF7fFE2YebNBFG
QmPMJ3HLLLBtW53R2YixhVHmVb3pkR2bHni3qbqq23vBSv
QmPNHJJphV1TB6Z99L6r9Y7bKaCBUQ67X17hicnEmsgWDJ
QmPNhiqGg81o2Perk2i7VNvvVuuLLUMKDxMNwVauP8r5Yv
QmPQJRgP3Vxi52Ho7HfnYdiCRJTRM1TXwgEnyjcwcLuKfb
QmNNxr1ZoyPbwNe2CvYz1CVyvSNWsE8WNwDWQ9t9BDjnj5
QmNT744VjtRFpDYB25EVLx7ha1zAVDKsd3qFjxfQLjPEXq
QmNWwGRWTYeut6qvKDhJBuEJZnbqMPMfuF81MPvHvPBX89
QmNZM5NmzZNPkvH2kPXDYNAB1cAeBNfxLyM9B1crgt3VeJ
QmNZRDzSJybdf4rmt972SH4U9TF6sEK8q2NSEJpEt7SkTp
QmNZdBUV9QXytVcPjcYM8i9AG22G2qwjZmh4ZwpJs9KvXi
QmNbSJ9okrwMphfjudiXVeE7QWkJiEe4JHHiKT8L4Pv7z5
QmNdqMkVqLTsJWj7Ja3oKwLNWcAYUkRjSZPg22B7rvKFMr
QmNfyHTzAetJGBFTRkXXHe5om13Qj4LLjd9SDwJ87T6vCK
QmNmrRTP5sJMUkobujpVXzzjpLACBTzf9weND6prUjdstW
QmNkGG9EZrq699KnjbENARLUg3HwRBC7nkojnmYY8joBXL
QmP6CHbxjvu5dxdJLGNmDZATdu3TizkRZ6cD9TUQsn4oxY
# Get all multiaddr's for a peer
$ ipfs dht findpeer QmYebHWdWStasXWZQiXuFacckKC33HTbicXPkdSi5Yfpz6
/ip4/192.168.1.14/tcp/4001
/ip6/::1/tcp/4001
/ip4/127.0.0.1/tcp/4001
/ip4/1.2.3.4/tcp/37665

ipfs dht put and ipfs dht get only work for IPNS records in the API. Maybe storing small data on the DHT itself has not been implemented (yet)?

Exchange: give and take

Data is broken up into blocks, and the exchange layer is responsible for distributing these blocks. It looks like BitTorrent, but it's different, so the protocol warrants its own name: BitSwap.

The main difference is that whereas in BitTorrent blocks are traded with peers looking for blocks of the same file (the torrent swarm), in BitSwap blocks are traded across files. So there is one big swarm for all IPFS data.

BitSwap is modeled as a marketplace that incentivizes data replication. The way this is implemented is called the BitSwap Strategy; the white paper describes a feasible strategy and also states that the strategy can be replaced by another one. One such bartering system can be based on a virtual currency, which is where FileCoin comes in.

Of course, each node can decide on its own strategy, so the generally used strategy must be resilient against abuse. When most nodes are set up to barter in some fair way, it will work something like this:

  • when peers connect, they exchange which blocks they have (have_list) and which blocks they are looking for (want_list)
  • to decide if a node will actually share data, it will apply its BitSwap Strategy
  • this strategy is based on previous data exchanges between these two peers
  • when peers exchange blocks they keep track of the amount of data they share (builds credit) and the amount of data they receive (builds debt)
  • this accounting between two peers is kept track of in the BitSwap Ledger
  • if a peer has credit (shared more than received), our node will send the requested block
  • if a peer has debt, our node will share or not share, depending on a deterministic function where the chance of sharing becomes smaller when the debt is bigger
  • a data exchange always starts with the exchange of the ledger; if it is not identical, our node disconnects

So this is set up in kind of a cool way, I think: game theory in action! The white paper further describes some edge cases, like what to do if you have no blocks to barter with. The answer is simply to collect blocks that your peers are looking for, so you have something to trade.
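To get a feel for how such a strategy could look in code, here is a minimal sketch (names are mine, not from the go-ipfs implementation) of a debt-ratio based decision, using a sigmoid of roughly the shape suggested in the white paper; the ledger fields match the "Bytes sent / Bytes received" we'll see in the ipfs bitswap ledger output below:

// Minimal sketch of a debt-ratio based BitSwap-style strategy (assumed names).
function debtRatio(ledger) {
  // bytes we sent to the peer vs. bytes we received from it
  return ledger.bytesSent / (ledger.bytesReceived + 1);
}

function shouldSendBlock(ledger) {
  const r = debtRatio(ledger);
  // the bigger the peer's debt to us, the smaller the chance we send another block
  const probability = 1 - 1 / (1 + Math.exp(6 - 3 * r));
  return Math.random() < probability;
}

// Example: a peer that has received a lot from us without reciprocating
// will almost never get another block.
console.log(shouldSendBlock({ bytesSent: 10000000, bytesReceived: 1000 }));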

Now let’s have a look at how we can poke around in the innards of the BitSwap protocol.

The command-line interface has a section blocks and a section bitswap; those sound relevant :)

To see bitswap in action, I’m going to request a large file Qmdsrpg2oXZTWGjat98VgpFQb5u1Vdw5Gun2rgQ2Xhxa2t which is a video (download it to see what video!):

# ask for the file
$ ipfs get Qmdsrpg2oXZTWGjat98VgpFQb5u1Vdw5Gun2rgQ2Xhxa2t
# in a separate terminal, after requesting the file, I inspect the "bitswap wantlist"
$ ipfs bitswap wantlist
QmYEqofNsPNQEa7yNx93KgDycmrzbFkr5oc3NMKXMxx5ff
QmUmDEBm9a8MYyqRdb3YQnoqPmqAo4cEWdKQErirFJdSWD
QmY5VJPbsRZzFCTMrFBx2qtZiyyeLhsjBysyfC1fx2gE9S
QmdbzYgyhqUNCNL8xU2HTSKwao1ck2Gmi5U1ygjQuJd92b
QmbZDe5Dcv9mJr8fiqp5aJL2cbyu64tgzwCS2Vy4P3krCL
QmRjzMzVeYRE5b6tDF3sTXMV1sTffno92uL3WwuFavBrWQ
QmPavzEJQw8atvErXQis6C6GF7DRFbb95doAaFkHe9M38u
QmY9fs1Pkr3nV7RkbGdfGh3q8HuKtMMCCUp22AAbwPYnrS
QmUtxZkuJuyydd124Z2cfx6jXMAMpcXZRF96QMAsXc2y6c
QmbYDTJkmLqMm6ojdL6pLP7C8mMVfVPnUxn3yp8HzXDcXf
QmbW9MZ7cwn8svpixosAuC7GQmUXDTZRuxJ8dJp6HyJzCS
QmdCLGWsYQFhi9y3BmkhUreX2S799iWGyJqvnbK9dzB55c
Qmc7EvnBPf2mPCUCfvjcsaQGLEakBbUN9iycnyrLF3b2or
Qmd1mNnDQPf1BAjFqDHjiLe4g4ZFPAheQCniYkbQPosjDE
QmPip8XzQhJFd487WWw7D8aBuGLwXtohciPtUDSnxpvMFR
QmZn5NAPEDtptMb3ybaMEdcVaoxWHs7rKQ4H5UBcyHiqTZ
.
.
.
# find a node where we have debt
$ ipfs dht findprovs Qmdsrpg2oXZTWGjat98VgpFQb5u1Vdw5Gun2rgQ2Xhxa2t
QmSoLMeWqB7YGVLJN3pNLQpmmEk35v6wYtsMGLzSr5QBU3
QmSoLnSGccFuZQJzRadHn95W2CrSFmZuTdDWP8HXaHca9z
QmUh2KnjAvgEbJFSd5JZws4CNvt6LbC4C1sRpBgCbZQiqD
Qmc9pBLfKSwWboKHMvmKx1P7Z738CojuUXkPA1dsPrvSw2
QmZFhGyS2W833nKKkbqZAU2uSvBbWUytDJkKBHimwRmhd6
QmZMxNdpMkewiVZLMRxaNxUeZpDUb34pWjZ1kZvsd16Zic
Qmbut9Ywz9YEDrz8ySBSgWyJk41Uvm2QJPhwDJzJyGFsD6
# try one to see if we have downloaded from that node
$ ipfs bitswap ledger QmSoLMeWqB7YGVLJN3pNLQpmmEk35v6wYtsMGLzSr5QBU3
Ledger for <peer.ID SoLMeW>
Debt ratio: 0.000000
Exchanges: 11
Bytes sent: 0
Bytes received: 2883738

Thank you QmSoLMeWqB7YGVLJN3pNLQpmmEk35v6wYtsMGLzSr5QBU3; what a generous peer you are!

Now, have a look at the block commands:

# Let's pick a block from the wantlist above
$ ipfs block stat QmYEqofNsPNQEa7yNx93KgDycmrzbFkr5oc3NMKXMxx5ff
Key: QmYEqofNsPNQEa7yNx93KgDycmrzbFkr5oc3NMKXMxx5ff
Size: 262158
$ ipfs block get QmYEqofNsPNQEa7yNx93KgDycmrzbFkr5oc3NMKXMxx5ff > slice_of_a_movie
# results in a binary file of 262 KB

We’ll have another look at how blocks fit in in the next chapter.

The three layers of the stack we described so far (network, routing, exchange) are implemented in libp2p.

Let’s climb up the stack to the core of IPFS…

Objects: organize the data

Now it gets fascinating. You could summarize IPFS as: Distributed, authenticated, hash-linked data structures. These hash-linked data structures are where the Merkle DAG comes in (remember our previous episode?).

To create any data structure, IPFS offers a flexible and powerful solution:

  • organize the data in a graph, where we call the nodes of the graph objects
  • these objects can contain data (any sort of data, transparent to IPFS) and/or links to other objects
  • these links — Merkle Links — are simply the cryptographic hash of the target object

This way of organizing data has a couple of useful properties (quoting from the white paper):

1. Content Addressing: all content is uniquely identified by its multihash checksum, including links.
2. Tamper resistance: all content is verified with its checksum. If data is tampered with or corrupted, IPFS detects it.
3. Deduplication: all objects that hold the exact same content are equal, and only stored once. This is particularly useful with index objects, such as git trees and commits, or common portions of data.

To get a feel for IPFS objects, check out this objects visualization example.
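To make the "a link is just the hash of the target object" idea concrete, here is a toy sketch in JavaScript (this is not the real IPFS object format, which is protobuf-encoded; it only illustrates hash-linking):

// Toy sketch of content-addressed objects with Merkle links.
const crypto = require('crypto');

function hashOf(object) {
  return crypto.createHash('sha256').update(JSON.stringify(object)).digest('hex');
}

// a leaf object: data, no links
const baz = { Data: 'baz', Links: [] };

// a "directory" object: no data, links to children by hash
const bar = {
  Data: null,
  Links: [{ Name: 'baz', Hash: hashOf(baz) }],
};

// Changing baz changes its hash, which changes bar's content and therefore bar's hash:
// that is the tamper resistance property. Two identical leaves hash to the same value
// and are stored only once: that is deduplication.
console.log(hashOf(bar));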

Another nifty feature is the use of unix-style paths, where a Merkle DAG has the structure:

/ipfs/<hash-of-object>/<named-path-to-object>

We’ll see an example below.

This is really all there is to it. Let’s see it in action by replaying some examples from the quick-start:

$ mkdir foo
$ mkdir foo/bar
$ echo "baz" > foo/baz
$ echo "baz" > foo/bar/baz
$ tree foo/
foo/
├── bar
│ └── baz
└── baz
$ ipfs add -r foo
added QmWLdkp93sNxGRjnFHPaYg8tCQ35NBY3XPn6KiETd3Z4WR foo/bar/baz
added QmWLdkp93sNxGRjnFHPaYg8tCQ35NBY3XPn6KiETd3Z4WR foo/baz
added QmeBpzHngbHes9hoPjfDCmpNHGztkmZFRX4Yp9ftKcXZDN foo/bar
added QmdcYvbv8FSBfbq1VVSfbjLokVaBYRLKHShpnXu3crd3Gm foo
# the last hash is the root-node, we can access objects through their path starting at the root, like:
$ ipfs cat /ipfs/QmdcYvbv8FSBfbq1VVSfbjLokVaBYRLKHShpnXu3crd3Gm/bar/baz
baz
# To inspect an object identified by a hash, we do
$ ipfs object get /ipfs/QmdcYvbv8FSBfbq1VVSfbjLokVaBYRLKHShpnXu3crd3Gm
{
  "Links": [
    {
      "Name": "bar",
      "Hash": "QmeBpzHngbHes9hoPjfDCmpNHGztkmZFRX4Yp9ftKcXZDN",
      "Size": 61
    },
    {
      "Name": "baz",
      "Hash": "QmWLdkp93sNxGRjnFHPaYg8tCQ35NBY3XPn6KiETd3Z4WR",
      "Size": 12
    }
  ],
  "Data": "\u0008\u0001"
}
# The above object has no data (except the mysterious \u0008\u0001) and two links
# If you're just interested in the links, use "refs":
$ ipfs refs QmdcYvbv8FSBfbq1VVSfbjLokVaBYRLKHShpnXu3crd3Gm
QmeBpzHngbHes9hoPjfDCmpNHGztkmZFRX4Yp9ftKcXZDN
QmWLdkp93sNxGRjnFHPaYg8tCQ35NBY3XPn6KiETd3Z4WR
# Now a leaf object without links
$ ipfs object get /ipfs/QmdcYvbv8FSBfbq1VVSfbjLokVaBYRLKHShpnXu3crd3Gm/bar/baz
{
  "Links": [],
  "Data": "\u0008\u0002\u0012\u0004baz\n\u0018\u0004"
}
# The string 'baz' is somewhere in there :)

The Unicode characters that show up in the data field are the result of serialization of the data. IPFS uses protobuf for that I think. Correct me if I’m wrong :)

At the time I’m writing this there is an experimental alternative for the ipfs object commands: ipfs dag:

$ ipfs dag get QmdcYvbv8FSBfbq1VVSfbjLokVaBYRLKHShpnXu3crd3Gm
{
  "data": "CAE=",
  "links": [
    {
      "Cid": {
        "/": "QmeBpzHngbHes9hoPjfDCmpNHGztkmZFRX4Yp9ftKcXZDN"
      },
      "Name": "bar",
      "Size": 61
    },
    {
      "Cid": {
        "/": "QmWLdkp93sNxGRjnFHPaYg8tCQ35NBY3XPn6KiETd3Z4WR"
      },
      "Name": "baz",
      "Size": 12
    }
  ]
}
$ ipfs dag get /ipfs/QmdcYvbv8FSBfbq1VVSfbjLokVaBYRLKHShpnXu3crd3Gm/bar/baz
{
  "data": "CAISBGJhegoYBA==",
  "links": []
}

We see a couple of differences there, but let’s not get into that. Both outputs follow the IPFS object format from the white paper. One interesting bit is the “Cid” that shows up; this refers to the newer Content IDentifier.

Another feature that is mentioned is the possibility to pin objects, which results in storage of these objects in the file system of the local node. The current go implementation of ipfs stores it in a leveldb database under the ~/.ipfs/datastore directory. We have seen pinning in action in a previous post.

The last part of this chapter mentions the availability of object level encryption. This is not implemented yet: status wip (Work in Progress; I had to look it up as well). The project page is here: ipfs keystore proposal.

The ipfs dag command hints at something new...

Intermission: IPLD

If you studied the images at the start of this post carefully, you are probably wondering, what is IPLD and how does it fit in? According to the white paper, it doesn’t fit in, as it isn’t mentioned at all!

My guess is that IPLD is not mentioned because it was introduced later, but it more or less maps to the Objects chapter in the paper. IPLD is broader, more general, than what the white paper specifies. Hey Juan, update the white paper will ya! :-)

If you don’t want to wait for the updated white paper, have a look here: the IPLD website (Inter Planetary Linked Data), the IPLD specs and the IPLD implementations.

And this video is an excellent introduction: Juan Benet: Enter the Merkle Forest.

But if you don’t feel like reading/watching more: IPLD is more or less the same as what is described in the “Objects” and “Files” chapters here.

Moving on to the next chapter in the white paper…

Files: uh?

On top of the Merkle DAG objects IPFS defines a Git-like file system with versioning, with the following elements:

  • blob: there is just data in blobs and it represents the concept of a file in IPFS. No links in blobs
  • list: lists are also a representation of an IPFS file, but consisting of multiple blobs and/or lists
  • tree: a collection of blobs, lists and/or trees: acts as a directory
  • commit: a snapshot of the history in a tree (just like a git commit).

Now I hear you thinking: aren’t these blobs, lists, and trees the same things as what we saw in the Merkle DAG? We had objects there with data, with or without links, and nice Unix-like file paths.

I heard you thinking that because I thought the same thing when I arrived at this chapter. After searching around a bit I started to get the feeling that this layer was discarded and IPLD stops at the “objects” layer, and everything on top of that is open to whatever implementation. If an expert is reading this and thinks I have it all wrong: please let me know, and I’ll correct it with the new insight.

Now, what about the commit file type? The title of the white paper is "IPFS - Content Addressed, Versioned, P2P File System", but the versioning hasn't been implemented yet it seems.

There is some brainstorming going on about versioning here and here.

That leaves one more layer to go…

Naming: adding mutability

Since links in IPFS are content addressable (a cryptographic hash over the content represents the block or object of content), data is immutable by definition. It can only be replaced by another version of the content, which therefore gets a new “address”.

The solution is to create “labels” or “pointers” (just like git branches and tags) to immutable content. These labels can be used to represent the latest version of an object (or graph of objects).

In IPFS this pointer can be created using the self-certifying filesystem approach I described in the previous post. It is named IPNS and works like this:

  • The root address of a node is /ipns/<NodeId>
  • The content it points to can be changed by publishing an IPFS object to this address
  • By publishing, the owner of the node (the person who knows the secret key that was generated with ipfs init) cryptographically signs this "pointer".
  • This enables other users to verify the authenticity of the object published by the owner.
  • Just like IPFS paths, IPNS paths also start with a hash, followed by a Unix-like path.
  • IPNS records are announced and resolved via the DHT.

I already showed the actual execution of the ipfs publish command in the post Getting to know IPFS.
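Conceptually, an IPNS record is just a signed "this name currently points here" statement. A rough sketch in Node.js (assumed field names, not the real record format) of why anyone can verify it: the signature checks out against the public key, and the NodeId is itself derived from that public key.

// Conceptual sketch of an IPNS-style record (assumed field names).
const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

function signRecord(record, privKey) {
  const payload = Buffer.from(JSON.stringify(record));
  return { record, signature: crypto.sign(null, payload, privKey) };
}

function verifyRecord(signed, pubKey) {
  const payload = Buffer.from(JSON.stringify(signed.record));
  return crypto.verify(null, payload, pubKey, signed.signature);
}

// "/ipns/<NodeId> currently points at this immutable object"
const signed = signRecord(
  { value: '/ipfs/QmdcYvbv8FSBfbq1VVSfbjLokVaBYRLKHShpnXu3crd3Gm', sequence: 1 },
  privateKey
);
console.log(verifyRecord(signed, publicKey)); // true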

This chapter in the white paper also describes some methods to make addresses more human-friendly, but I’ll leave that in store for the next episode which will be hands-on again. We gotta get rid of these hashes in the addresses and make it all work nicely in our good old browsers: Ten terrible attempts to make IPFS human-friendly.

Let me know what you think of this post by tweeting to me @pors or leave a comment below!


Understanding the IPFS White Paper part 2 was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Node.js Weekly Update — 22 September, 2017

Par RisingStack

Below you can find RisingStack’s collection of the most important Node.js updates, projects & tutorials from this week:

DIY Object Recognition with Raspberry Pi, Node.js, & Watson

A glorious thing nowadays is that you needn’t be an AI researcher to leverage machine learning.

In this post you will learn how to roll your own custom object recognition solution with Raspberry Pi, Node.js, and Watson.

Mastering the Node.js Core Modules — The Process Module

In this article, you will take a look at the Node.js Process module, and what hidden gems it has to offer. After you’ve read this post, you’ll be able to write production-ready applications with much more confidence.

Topics include (there is a small sketch of these below):

  • process.on('uncaughtException') vs process.on('unhandledRejection'),
  • Node.js signal events, like SIGTERM and SIGUSR1,
  • Node.js exit codes.
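If you haven't used these hooks before, here is a minimal sketch of what they look like (standard Node.js APIs; the article itself goes into production-ready detail):

// Minimal sketch of the process hooks discussed in the article.
process.on('uncaughtException', (err) => {
  // a synchronous error escaped every try/catch: log and exit, process state is unreliable
  console.error('uncaught exception', err);
  process.exit(1);
});

process.on('unhandledRejection', (reason) => {
  // a Promise rejected with no .catch() attached
  console.error('unhandled rejection', reason);
});

process.on('SIGTERM', () => {
  // e.g. an orchestrator asking the process to shut down gracefully
  console.log('received SIGTERM, shutting down');
  process.exit(0);
});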

npm v5.4.2 released

This is a small bug fix release wrapping up most of the issues introduced with 5.4.0.

Node.js Performance Monitoring — Debugging the Event Loop

In this blog post, you will take a deep dive into the Node.js event loop and learn how to diagnose and debug issues that stem from unoptimized JavaScript.

Typically you’re going to want your Node.js application to perform with low lag and high idle time in the event loop — this is usually a sign of an efficient application.

Modern JavaScript cheatsheet

This document is a cheatsheet for JavaScript you will frequently encounter in modern projects and in most contemporary sample code.

This guide is not intended to teach you JavaScript from the ground up, but to help developers with basic knowledge who may struggle to get familiar with modern codebases because of the JavaScript concepts used.

Previously in the Node.js Weekly Update

In the previous Node.js Weekly Update we read how the Node.js module ecosystem should be rethought with the browser in mind, as well as what’s new in Node.js 8.5.

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!

Originally published at community.risingstack.com on September 22, 2017.


Node.js Weekly Update — 22 September, 2017 was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Unsolicited Career Advice for Tech Marketers

Par Siddharth Deswal

TL;DR be a business person doing marketing, not a marketer doing marketing

When marketers ask me for advice about what they should do to further their careers, they ask about the skills they should acquire, and the functional areas they should specialize in.

Mostly, they receive advice from others that they should be a T-shaped marketer, i.e. have a broad set of marketing skills where they’re decently good, with a couple of areas where they have deep experience and knowledge. I agree with and propound this view. Over a period of 30 to 40 years of a professional career, functional/technical expertise is one of your best bets to grow. There’s research to indicate that reportees prefer bosses who could easily do their job, and Andy Grove, the legendary CEO of Intel, built a company where he fostered “knowledge power” over “position power”, something that you see all across the flat hierarchies in today’s tech startups.

However, I also think that most marketers end up becoming “just marketers”, and the same happens to folks in other departments who dedicate themselves only to their craft. They end up being just salespeople, or just customer success people, frequently capping their growth at Director or VP of their respective functions.

Instead, I recommend you become a businessperson who is currently handling marketing. By a businessperson I mean you understand your entire business deeply, you understand the different levers that make it what it is, the customer behavior, the delivery of value, the metrics, the industry, what keeps this entity up and ticking, and finally, you understand how your function contributes to it all.

I say this because I see marketers speak in meetings with leadership, and their ideas are not accepted, they’re not given the importance they feel they deserve, or they’re just a part of the side conversation. While the CEO is busy thinking about big-picture stuff like cash flow, burn rate, inside sales process, efficiency, competition, product strategy, the marketer limits himself by capping his thinking at “how do I increase my budget”, or “what if we tried to do more social media”.

Marketers are taught to understand their target audience, but most of them repeatedly fail to understand what it is that’s bothering their CEO. And if you want a seat in the boardroom sometime in the future, you have to start talking their language today.

Advice on how to become a ‘businessperson’

Start by understanding the big picture, and then get an excellent grasp on how that drills down to the day-to-day activities performed by your organization. Basically, understand the levers that run your business. This will necessitate that you understand some key metrics, and once you do, you’ll be able to quickly size up any business in your industry if you get to know their key metrics.

For example, if you know that a SaaS startup has an Annual Contract Value (ACV) of USD 3000, then you should be able to guess that an inside sales team doesn’t make sense for them, churn is an important number to contain, and most of their growth will be coming from new business being added every month, instead of upsells.

Similarly, if a firm has an ACV of USD 200,000, then they’re likely to have a field sales team, extremely strong account management, and a large part of their growth, if not most of it, will be from account expansion (getting existing customers to pay more over time).

Ask yourself why, and what form of marketing is needed today.

If it’s a bootstrapped SaaS, then you should be able to infer that the CEO won’t spend heavily on paid marketing channels, because she doesn’t have that kind of money. Instead, she’ll focus on low-cost inbound marketing, and maybe outbound email/phone prospecting. For her, profitability is more important than outright growth.

If it’s a consumer business that wants to grow fast, then they better have a marketing war chest to spend, or incredible word-of-mouth virality. Aside: differences between B2C and B2B marketing.

If the business is heavily VC funded, then it is basically a financial product, not for you or me, but for the VCs who invested in it. Everyone’s job is to get it to an exit event in a defined(ish) timeline. That event is heavily dependent on growth, so expect a lot of budget for marketing.

If it’s an enterprise play, then marketing’s job is to support Sales with full-funnel sales collateral, product marketing (testimonials, case studies, whitepapers, spec sheets, knowledgebase), and make an impact at events where your target decision makers congregate. Essentially, you want your very specific potential customers to know you, trust you enough to give Sales an opportunity to talk to them, and then be able to evangelize you internally to various stakeholders. Aside: read this interesting post by Mike Volpe on difference between Enterprise and SMB CMOs.

Essentially, when you look at a given situation, you should be able to guess if marketing is needed at all, and if yes, then what kind and form of marketing.

Understand the business you’re in as a system

Systems thinking is a management discipline that concerns an understanding of a system by examining the linkages and interactions between the components that comprise the entirety of that defined system.
Source

All businesses are complex systems, and these systems interact with and impact each other. As business people, it’s important that we marketers understand how these different systems affect each other, and whether things need correction or not.

1) A SaaS started out with a basic product that mostly catered to SMB customers. Over the years, competitors came in so they kept adding new features to stay ahead of the curve. So many features in fact, that the product isn’t meant for SMB customers anymore who prefer simple, easy-to-use tools. It is now better suited to larger customers with more detailed needs.

However, the brand, website, copy and collateral still retain that SMB positioning. Sales keeps trying to talk to enterprise prospects, but the conversion rate isn’t great, or the ACV isn’t as large as it could be. As a ‘businessperson’ marketer, you should be able to understand and rectify this mismatch.

2) Sales is incentivized to develop a new territory, but no marketing budget or team has been assigned to that new territory. Soon enough, Sales complains that it isn’t going as well as they hoped, and they need geography specific collateral and programs to generate demand.

Similar alignment issues crop up multiple times during the lifetime of a business, and marketers must see the big picture, figure out how everything comes together, and push towards that.

Become friends with your head of Finance

When Wingify first hired a person to head up Finance, I thought ok, we probably need this role, but I wasn’t sure what value they’d provide. This was a classic “If you don’t think you need it, you haven’t seen greatness” moment for me. I’ve learnt tons from him since then about financial metrics, what they mean, and the underlying stories they tell. Understanding these metrics is one of the best ways to get a grasp of the whole business.

While finance execs are hired a little later in the life of a company, if you have one in yours, I recommend having at least monthly one-on-ones with them to discuss the latest tech IPO S-1 and compare it with other tech businesses. Or, you can get a head start on this by going through Tom Tunguz’s benchmarking of 7 key SaaS metrics for various companies that have gone public.

If this is good stuff, I’d certainly appreciate you sharing it with your network:

Top image from https://pixabay.com/p-2108867/


Unsolicited Career Advice for Tech Marketers was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Here’s Why AI in Performance Marketing is Here to Scale

Par OnlineSales.ai
“Anything that seems rote or mechanical, there is no reason for humans to do — it’s all going to go to AI.”

-Dharmesh Shah, Co-founder, Hubspot

There has been a lot of conversation and exchange of opinions on Artificial Intelligence, especially in the wake of the recent exchange of fury between Elon Musk and Mark Zuckerberg, two of the biggest industry stalwarts.

The lines have been drawn and many have started feeling the pressure of taking sides in what is being deemed the decisive clash between humans and machines.

Ok, these dramatic spoils of war could be an after effect of you know what. You’ll have to excuse me for the ongoing GoT hangover. There is no war that we are facing as marketers.

Well, not yet.

Having said that, there is no denying the rise of AI or Machine Learning, if you will, in Performance Marketing and that it is not only here to stay, but to scale beyond our current imagination.

It’s better to be aware and ready for the changes that Artificial Intelligence is going to bring in our lives as Marketers.

Rise of Artificial Intelligence in Performance Marketing

The Groundwork: How Performance Marketing Came to be into Its Own

Google led the way to revolutionize the traditional ways of marketing and advertising with its introduction of online real-time bidding for ad space in SERPs.

Google’s efforts gave birth to what has now come to be known as Performance Marketing.

It brought in a very basic and immensely powerful paradigm shift and changed the game for advertisers, forever.

Around the same time, Amazon also revolutionized retail commerce and converted it into e-commerce.

Do you remember what it was that clicked for Amazon to begin with? Product Recommendations, or Recommended Reads in its initial days.

Yup, you read it right. It has now moved to rocking AI-powered Dynamic Pricing.

Well, Google may have been one of the first players but certainly not the last.

But what is notable here is its unmatched dominance and its expansion into businesses like self-driving cars and Google Glass even after about 20 years of existence.

And it seems like that’s going to be the case in our near future too with powerful data available to its platforms.

The Journey: From Traditional Marketing to Performance Marketing to Machine Learning

Advertisers are now fighting it out for a temporary and virtual ad space provided by Google instead of traditional ad real estate.

Instead of betting their full budgets on a campaign and then waiting for the value afterwards, advertisers now seek and get value before and during their campaigns.

Earlier, a failed campaign was a lost cause, often with colossal financial losses. So much so that some brands never recovered from a failed campaign.

Google brought in on-going optimization in marketing and ad spends and an unprecedented control over it.

Credible data and analytics led decision making brought in accountability and made marketing RoI a reality.

This was a win-win for everyone involved: marketers and advertisers, publishers, and even the audience.

Machine-learning-based programmatic buying is now leading the way in carrying performance marketing into the future with chatbots, lookalike audience modeling, real-time personalization, predictive analytics, anticipatory design, and more.

Present Scenario: The World of Chatbots, Siri, Cortana and Voice Search

If you have watched the 2015 science-fiction film Ex Machina, you will remember how the owner of the tech giant behind the world’s biggest search engine uses the massive data his search engine provides to choose the protagonist for his AI experiments.

Yes, that not-so-subtle reference to the power of massive data available to the world’s most powerful search engine.

The key was the process of analytics-led selection, decision making and personalization according to the preferences. Preferences, not even known to or recognized by the protagonist himself.

Well, at least that part is no longer science fiction but very much the reality of performance marketing today. And make no mistake about the power of data available to the world’s largest search engine.

You might remember the first instance of Facebook’s primal chatbot responding to audience queries on business pages, just a couple of years ago. Now, Facebook is using AI to analyze footage collected by satellites and its own special aircraft to map all human life on the planet.

What does Facebook intend to do with this data about all humankind? Well, does Facebook’s Free Basics campaign ring a bell? Oh yes, this is the dream close to Mr Zuckerberg’s heart and he is not giving up on it. Now we get the point of his stance.

We all have experienced and are very well acquainted with voice-based searches, chatbots for various purposes, speech recognition (Google Voice Search, Siri, Cortana etc), content curation, product recommendations, clickbait headlines (listicles anyone?), ad targeting and heck even dynamic pricing.

Yes, all of it is machine-learning, all of it. And it is here not to just stay, but to scale.

Why Is It Here to Scale?

“The future of marketing might build itself.”

-Jeremy Waite, Evangelist at IBM Watson

Even though Machine Learning, Artificial Intelligence, Automation, Bots, and the rest have been crazily trending buzzwords across the world recently, and the buzz just refuses to die down, much is still left to be known.

There is a significant amount of speculation because the information available is either inadequate or half-baked. Then there are counterpoints to confuse the situation further.

Let’s check some of these questions that keep cropping up in discussions of AI:

  • Will AI take our jobs away?
  • Will humans be controlled by robots very soon?
  • How do I upgrade my skills to safeguard myself against the impending onslaught of Artificial Intelligence?
  • Is Machine Learning here to stay?

And the list continues. Each of these questions is a full-blown topic worthy of a separate deep discussion by subject matter experts.

However, what we can certainly say is that, going by the looks of it, AI is not just here to stay for good but here to scale. AI is the future of marketing.

Let’s consider some telling statistics

51% of companies are currently using Marketing automation. With more than half of B2B companies…

Source: Emailmonday

Only 14.4% of respondents are currently using predictive analytics, but 34.9% are considering…

Source: Heinz, Reachforce and research partners.

Marketing automation drives a 14.5% increase in sales productivity and a 12.2% reduction in…

Source: Nucleus Research

Some of the simple and subtle yet profound reasons why I personally feel AI in marketing is here to scale include:

  • We’re already living in an analytics-dependent world
  • Our habits of using smart technologies have already shaped up way beyond our control
  • We’re seeking more contentment, we seek automation of mundane life
  • The world is busier than ever, we need our bots and virtual assistant to survive
  • Scale is what we need to succeed in today’s world; AI is what we need to scale
  • AI is shaping our behaviors and consumption patterns like never before

We could find tons of surveys and reports and statistics screaming to us that AI in Marketing is here to stay. And similarly, for the opposite too.

It doesn’t matter which side we are on. Only the future knows what it has planned for us.

We can just look for the signs to gauge how the future is going to pan out. And right now, all signs lead to the conclusion that AI in marketing is here to scale, and the best we can do is be ready and embrace it.

Machines are learning, learning fast and learning at scale.

It’s time we learn to harness the power of this immense learning, machine or otherwise.

Related Posts:

Why AI is the Future of Marketing

Cede Control — Accelerate with Marketing AI

Writing Your Own Product Recommender? You Need To Read This First

Recipe for Powerful YouTube Performance Marketing

Originally published at onlinesales.ai on September 22, 2017.


Here’s Why AI in Performance Marketing is Here to Scale was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Build Your Own React

Par Ofir Dagan

Build Your Own React — A Step By Step Guide

Abstract: In this post I’ll create a fully working version of react step by step. It won’t be an efficient version, but it will give you a glimpse into how react works under the hood. If you’re familiar with react / the web / the dom etc., you can go ahead and skip to step 1.

Intro

Before writing react apps, like many others, I wrote in angular. The thing about angular is that if you want to be a good angular developer you need to know how angular works. The more you know, the better you’ll be. With react though, it doesn’t seem to be the case. You learn about class components and stateless components. You learn about props and state and that you should never change the props. Then you learn that in case you want something to be dynamic you call setState, and that’s pretty much it. Excluding lifecycle methods and other small features, there is not a lot more you need in order to write good react apps. There’s no doubt that the learning curve of react vs angular is much smaller.

But, I became a programmer because I like knowing how stuff works. What makes things tick. It doesn’t matter if it’s the international space station, a Tesla car or a vending machine. When I see something that interests me, soon after I start questioning how it works. The same happened with react. I saw a great talk about react under the hood and I thought to myself… that doesn’t look so complicated… I can try it out.

What is react anyway?

Before we dive into writing react we should know what it is that we want to build. As I see it, react is a library for building dynamic trees. In the web use case react will build a DOM tree. However it doesn’t have to be the DOM. With react native, react builds a native app hierarchy of ui controllers. You could even write Arduino apps in react.

What is a DOM tree anyway?

DOM stands for document object model. Every html document is built from elements that are called “dom elements”. The <html> tag is a dom element, and so are <div>, <h1>, etc. The tree has a root, usually the html tag. Every node in that tree has its own properties and it may have children which are also dom elements. For example, in the following image you can see a part of an html document and its representation as a tree of objects.

A few words about react and jsx

If you’re already familiar with react you probably use jsx. So a simple component can look like this:

const mySimpleComponent = () => {
  return <h1>Hello World</h1>;
}

After running babel or your other favorite transpiler this component will look like this:

const mySimpleComponent = () => {
  return React.createElement('h1', null, 'Hello World');
}

A special babel plugin will transpile the jsx call to the underlying js api that react provides. In our case the <h1> tag got transpiled into a React.createElement call where the first argument is the tag name, the second argument is the properties and the third is its children.

Let’s look at a bit more complicated jsx example:

<div style={{color: 'yellow'}}>
  <Title text="I'm a title"/>
  <Person name="moki"/>
</div>

Which will result in:

React.createElement('div', {style: {color: 'yellow'}},
  React.createElement(Title, {text: "I'm a title"}, null),
  React.createElement(Person, {name: 'moki'}, null)
);

Now that we’re in sync we can start building our own react.

Step 1- DOM elements (Hello World)

First, I’ll create a host html document for my app. It will load my react.js version, app.js for the app’s logic and it will have a div with id root to attach my react app to.

The way we’re going to build react will be in what I like to call ADD. It’s like TDD, but instead of Test Driven Development it stands for Application Driven Development. In each step I’ll show the app.js that I would like to render. Then I’ll implement it in the react.js file.

My first app.js looks like this:

In order for this code to work I will need to implement React.createElement and ReactDOM.render

*ReactDOM is a separate module from react but for the sake of simplicity I’ll write them both together.

Let’s do this. Here’s my react.js:

And we got a hello world working! You can see the result here.
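As a rough sketch (my approximation, not necessarily the exact code from the gist), a first version of these two functions could look like this:

// Sketch of a first react.js: plain DOM tags with text children only.
const React = {
  createElement(tag, props, ...children) {
    // for now just keep the description around as a plain object
    return { tag, props: props || {}, children };
  },
};

const ReactDOM = {
  render(element, domContainer) {
    const domElement = document.createElement(element.tag);
    element.children
      .filter((child) => typeof child === 'string')
      .forEach((child) => domElement.appendChild(document.createTextNode(child)));
    domContainer.appendChild(domElement);
  },
};

// app.js equivalent:
ReactDOM.render(React.createElement('h1', null, 'Hello World'), document.getElementById('root'));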

Step 2 — Render non-DOM elements, or as we like to call them in react, `components`

2.1- We want to add support for stateless components

app.js :

The only thing that changed here is that the element can now be a function. In case it is, we’ll invoke it.

react.js :

*result

2.2- Not all children are born equal. We want to add support for non-plain-text children. Let’s refine the way we handle children.

app.js :

react.js :

*result

2.3- Add support for class components

app.js :

As we did with the stateless components, we will find out if it’s a class, and in case it is, we’ll create a new class instance and call its render function.

react.js:

*result
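As a rough sketch (assumed names, not the exact gist code), the class check could look like this:

// Sketch of telling class components apart from plain functions.
class Component {
  constructor(props) {
    this.props = props;
  }
}
Component.prototype.isReactComponent = true; // marker used by isClass below

const isClass = (type) =>
  typeof type === 'function' && !!(type.prototype && type.prototype.isReactComponent);

function expand(element) {
  // turn class and stateless components into plain DOM-tag elements, recursively
  if (isClass(element.tag)) return expand(new element.tag(element.props).render());
  if (typeof element.tag === 'function') return expand(element.tag(element.props));
  return element; // already a plain tag like 'div' or 'h1'
}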

2.4- It’s time for some good old refactoring. Our anElement function is getting too long. Let’s extract some methods out. Also, let’s create a react-utils.js for miscellaneous functions such as isClass, isStatelessComponent etc.

Step 3- Props and State

To recap, up until now we rendered dom elements, stateless components and class components. Let’s add props and state to the mix.

3.1- Stateless component props

app.js :

This one is easy. We just need to pass in the props to the component (function)

react.js :

*result

3.2- Class component props

app.js :

You should notice that the Hello class now extends React.Component. I did this so I will have a common parent class for all of my react classes for purposes such as assigning the props on the class instance.

Component class:

Now that we have the Component class, we’ll pass the props in the constructor and we’re done.

3.3- Attributes

This simple component should show an alert on button click.

app.js :

As with the children, we’ll iterate over the attributes and set event listeners for attributes that start with on*, and set attributes for the rest.

react.js :

*result
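A rough sketch of that attribute handling (using the appendProp name that shows up in the refactor in step 3.4; again an approximation, not the exact gist code):

// Sketch: apply one prop to a real DOM element.
function appendProp(domElement, name, value) {
  if (name.startsWith('on')) {
    // onClick -> 'click', onMouseOver -> 'mouseover', ...
    domElement.addEventListener(name.substring(2).toLowerCase(), value);
  } else {
    domElement.setAttribute(name, value);
  }
}

// usage: for every prop of the element being rendered
// Object.entries(element.props).forEach(([name, value]) => appendProp(domEl, name, value));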

3.4- Refactor. handleHtmlElement became too big. We’ll extract out appendChild and appendProp functions. Also we can clean the code a little bit using lodash.

3.5- State

We’re ready to write a real react app. Let’s write a counter app. It will have two buttons to increment and decrement the counter, and will present the current value.

app.js :

To make this work we need to implement the setState function. My naive algorithm will delete the entire dom on every setState call and will render it again. For that I will need to save a reference to the root react element and to the root dom element.
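A rough sketch of that naive setState (assumed names, not the exact gist code):

// Sketch: brute-force setState that redraws everything from the saved roots.
let rootReactElement; // saved on the first ReactDOM.render call
let rootDomElement;   // ditto

function reRender() {
  rootDomElement.innerHTML = '';                       // delete the entire DOM...
  renderElementInto(rootReactElement, rootDomElement); // ...and build it again from scratch
  // renderElementInto stands for whatever function turned elements into DOM nodes in the earlier steps
}

class Component {
  constructor(props) {
    this.props = props;
  }
  setState(partialState) {
    this.state = Object.assign({}, this.state, partialState);
    reRender(); // every state change redraws the whole tree
  }
}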

react.js

Next I would like to save the classes that I’ve created so when reRender happens I won’t create them again and lose my state. To achieve this I will do the following:

  • Hold a cache of the instantiated classes
  • When handling a class, check the cache; in case of a match, return the cached instance
  • Instead of calling render straight away, return an object with a special flag that marks it as a react class.
  • When appending a child node, if it’s a react class, call its render

handleClass :

handleChild :

*result

That’s it. We have our own working react. Don’t believe me? Check out this todo app that I took from the web. I didn’t change anything except for the references to the real react and react-dom.

The todo app doesn’t impress you much? How about this minesweeper game?

Now what?

Let’s reflect a little bit on what I’ve shown you. The react version I wrote… it doesn’t seem super efficient. On every call to setState I clear the entire dom and create it from scratch. Also, the first thing you hear when someone talks about how react works is the mysterious virtual dom. I didn’t write anything virtual in my code, nor did I mention it.

Time to face the truth

Time for some performance analysis. I measured the time to first render of the minesweeper game, both with my react and with the real react. The results are going to surprise you.

It seems that my react is doing about 2.5 times better than the real react. But let’s hold our horses for a minute. If we continue to play we can see the time it takes for every setState to finish. With my react it takes between 2–8 ms, as opposed to 0.01 ms for the real react. This difference is why we love react and use it so often. Besides being super intuitive and easy to learn, it’s very fast and keeps getting faster.

The react algorithm

First my algorithm: on every re-render I

  • Clear the dom
  • Render the DOM from scratch

If we look at the code we will see that everything we wrote was javascript. We didn’t use any html or other languages. So what if, instead of creating the dom elements straight away, we keep a tree model of the dom made of javascript objects? It’s very easy to represent our DOM as a javascript object, right? We saw that in the beginning.

So a better algorithm will be:

  • Call render on the js tree model
  • Read the current dom
  • Figure out the changes
  • Apply only these changes on the dom

This will make things faster. But we’re smarter than that. We want to keep our reads and writes to the dom as minimal as we can. We already have a js tree model (let’s call it the virtual dom from now on) that reflects the way our dom should look.

So we can have an even better algorithm. The idea here is to have two virtual doms: one always represents the current dom and the other represents the future dom. On re-render:

  • Create a new virtual dom
  • Figure out the difference between it and the current virtual dom (diffing between two js trees can be done pretty fast)
  • Apply only the changes on the real dom

The process of diffing and merging the diffs into the real dom tree is called reconciliation. And this is how react really works.
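To make the idea of diffing concrete, here is a toy sketch (nothing like react’s actual reconciler, just the principle of computing patches instead of rebuilding everything):

// Compare two virtual dom trees and collect a list of patches.
function diff(oldNode, newNode, patches = [], path = []) {
  if (oldNode === undefined) {
    patches.push({ type: 'ADD', path, node: newNode });
  } else if (newNode === undefined) {
    patches.push({ type: 'REMOVE', path });
  } else if (typeof oldNode === 'string' || typeof newNode === 'string') {
    if (oldNode !== newNode) patches.push({ type: 'REPLACE', path, node: newNode });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ type: 'REPLACE', path, node: newNode });
  } else {
    if (JSON.stringify(oldNode.props) !== JSON.stringify(newNode.props)) {
      patches.push({ type: 'SET_PROPS', path, props: newNode.props });
    }
    const oldChildren = oldNode.children || [];
    const newChildren = newNode.children || [];
    const length = Math.max(oldChildren.length, newChildren.length);
    for (let i = 0; i < length; i++) {
      diff(oldChildren[i], newChildren[i], patches, path.concat(i));
    }
  }
  return patches;
}
// The resulting patches are then applied to the real dom, touching only the nodes that changed.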

Fiber

You might have heard about fiber. Fiber is the new reconciliation algorithm the guys at react are working on. It should be out soon. The idea behind fiber is simple. Up until now the reconciliation process was linear. Once it started you couldn’t stop it. Fiber aims to change that. The motivation for it is priorities. We realize there are some changes that are more important than others. For example, if you have a text node that needs to be changed and a heavy animation to draw, it’s important that you draw the animation first, otherwise the user will notice the ui lagging. If the text node changes with a few ms of delay, the user won’t notice it. Fiber has a built-in interrupt mechanism. Think of it as a pause button. At any given moment during the reconciliation, react can tell fiber to pause, do other calculations such as animations and then continue. This will cause our apps to look much more fluid.

The beauty of it all is that it’s totally transparent to the developer. The reconciliation algorithm is an implementation detail. We saw that with my own lousy implementation of it. The apps worked, but not very well. The simplicity of react’s api and the complexity of its implementation are what make it such a powerful tool.

All the code you’ve seen on this post is available here. Catch me on twitter at @ofirdagan2 if you have any questions or just to say hi :)


Build Your Own React was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Data Science in a Box With Dataiku

Par Chris Chinchilla

Data science is the new hotness, with thousands of job postings (some of which really aren’t data science) and dozens of platforms promising to help professionals in the field do their job more effectively. In typical fashion, not all these tools are new; some are re-purposed for new use cases, with tools such as Python, R, and Hadoop experiencing new surges in interest thanks to the ‘new’ field of Data Science.

One of the most well conceived and cohesive tools I’ve seen is Dataiku. It aims to package together all the tools that a data scientist and the teams that work with them might need in one application.

To experiment with Dataiku you will need a decently sized dataset. I opted for the time-honored NYC Taxi trip records, but for the sanity of my laptop used only a couple of gigabytes of the data.

Dataiku consists of a handful of open-source components (many of which you might recognize), but the software itself is closed source, bound together with proprietary code, with free and enterprise editions that you can install locally or in the cloud. For this review, I will use the Mac version of the free desktop client.

Download the application, run it, and your browser will automatically open to http://localhost:11200. Then head over to the New project section and choose one of the helpers to get you started; I chose the ‘Tutorial 101 Starting project.’

You can import data from a local or server file system, Hadoop, a variety of SQL and NoSQL sources, cloud storage providers, and further options provided by plugins. After scanning your data, Dataiku provides a preview and some options for tweaking the import and schema, then you’re ready to create your dataset by clicking the green create button.

Next, you will see the Data exploration screen where you can view, filter, sort, and analyze (provides a column based overview) your data. There are also processors for certain data types, for example, geocoding location data. You can create a wide variety of charts by dragging and dropping fields, or switching between types for a preview.

Useful so far, but you can also mix and match the GUI interface with Python, R, and SQL; if you have ever used Jupyter notebooks, the style will be familiar to you. I’m no Python programmer, but thankfully there’s also a built-in console and debugger to help me figure out what the problem is.

For the non-coders, Dataiku offers built-in machine learning models for prediction and clustering of data, and the ability to create your own learning models and train them. Again, creating your own is a matter of clicking, dragging and selecting options, for example, I created a model to show me what taxi pickups fell on weekends and public holidays in the US.

And finally, to assemble all these components together is the workflow section where you can define which steps to run, and in what order, triggered manually, or programmatically via a REST API.

This scratches the surface of what anyone needing to process and analyze large data sets can accomplish with Dataiku, and you can find more details on their website, or listen to the interview I conducted with Claude Perdigou, a product manager with the company.

https://soundcloud.com/gregarious-mammal/dataiku-interview

Originally published at dzone.com.


Data Science in a Box With Dataiku was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

7 minute read to save $100,000 on your app idea.

Par Nitesh Agrawal

I love founders. I love the energy they bring to this world. I love how they find business problems to solve in the least (or most) expected areas. I love the diversity.

At Indiez we have helped 100s of entrepreneurs build successful products. And to help those 100s we have spoken to 600+ founders from all over the globe.

After speaking to so many founders I noticed that the following attributes are common in successful founders:

Belief — As a founder, you have to believe in what you are doing. It is a startup and there will be bad times. In those bad times, your belief will push you to go the extra mile and keep going!

Persistence — Building a legendary company requires extraordinary focus and effort. You’ll have to miss those Friday night parties and cut down on your sleep. You will be solving a problem that’s driving you crazy every day.

Disclaimer — Startups are risky. I can’t tell you whether you’ll be able to make your product successful. Not yet.

Think of Airbnb in the 2000s. Would you have ever let someone unknown enter your home and sleep on your bed?

The truth is you can’t know if your product will be successful until and unless consumers start using it. Only consumers can say if your ship is going to sail!

However, you can follow a parallel path to test your idea while building the product. Here are a few pieces of golden advice that can potentially save you a lot of dollars, energy and, most importantly, time.

1. Valuable free information

You must be thinking — “What are you saying? I spend a lot of time on Quora every day. I read all the tech blogs every day”. Well, that’s not enough.

This step may feel rudimentary and kinda obvious, but almost every founder misses it.

If your idea is to build an app, go to the Play Store and the App Store and search with all the possible keywords. Don’t just do it on mobile, because your region may restrict the results to a particular geography. Do it on your desktop and see the kind of results you are getting.

If you find an application that is similar to yours go deep into it. Explore all the features, read about the company. Make notes of what’s wrong and what’s right.

Now, google various keywords related to your idea. For example, if you are building an event planner application, you can google “Best event planner applications”, “Best ways to plan my event”, “Party Planning tool”, “Wedding planning tool” and so on. You’ll find various existing businesses. Observe what they are doing right and wrong.

Now you have a list of relevant businesses — look at the traction your competitors or potential competition have. You can use products like Similarweb and Alexa to get this information.

You can also get an idea of how people find your offering with the Google AdWords planner. You can simply put in keywords to learn what terms people are searching for and how often. Record this data and keep it handy. You will need it later.

2. Look around in the space.

Now you know the various existing businesses in your space and you know your competition.

The next step is to find out more information about the space. A few questions to answer are:

- How many startups are getting funding in this space? It’s a good sign if startups are getting funded in this space. I find Crunchbase and Mattermark really helpful for this research.
- Who is getting press and what are they claiming to solve? You can look at TechCrunch, Venturebeat and TNW.
- Listen to podcasts/youtube videos of the founders of your competitors. This will help you get a sense of how they are building the business.

3. You don’t know them, but they can help

It is difficult to get honest feedback from your friends and family members. There’s always a bias. But, you can do these two tricks to get some reactions on your idea -

$10 spent on Starbucks — Walk up to a random person and say that you are starting up a new company and potentially putting in your entire life’s savings on your idea. It will be really helpful if they can give you some feedback and you can buy them a coffee in return.

Be very clear and explicit on the problem and solution. You’ll be amazed by the kind of inputs you can get from a person you don’t know.

If you’re a shy person, you can hire 10 people on Fiverr or Amazon Mechanical Turk and ask them to give structured feedback. You’ll spend $50 doing this, but it will help you evaluate your idea in a matter of days.

4. Polls

Online polls are amazing and are a super easy way to gather feedback. I love twitter polls for their simplicity and instantaneous reach. Best part- they are free! 😃

You must be thinking “that’s not my audience”. Well, here’s the truth — when a lot of people who have no skin in the game are telling you the same thing, you have to listen.

You can use other tools like Typeform and Google Forms to build a simple form and send it to your network to ask basic questions about your idea.

5. Time to start marketing

Till now you must have an idea of what you can offer to your audience. Time to test!

Build a basic landing page with clear messaging. You can use Bitblox, @LaunchRock and Unbounce (my fav) for this.

Run targeted FB advertisements that drive your potential customers to your page. I learned this trick from Blake (500 Startups Mentor).

It’s dead simple and very effective.

Kin.today cultivated a 10,000-member email list for a simple calendar app. The product is still in beta, and all those people signed up through the landing page.

Put in basic analytics to track how many users are visiting and how many are signing up.
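
If it helps to make that concrete, here is a minimal sketch in Python. The landing_page_stats.csv export and its numbers are hypothetical; the point is simply to turn raw visit and signup counts into a conversion rate per traffic source:

import csv

def conversion_rate(visits, signups):
    # Signups as a percentage of visits; 0 if there were no visits.
    return 100.0 * signups / visits if visits else 0.0

# Hypothetical analytics export, one row per traffic source:
# source,visits,signups
# facebook_ad,1200,84
# adwords_search,800,31
with open("landing_page_stats.csv", newline="") as f:
    for row in csv.DictReader(f):
        rate = conversion_rate(int(row["visits"]), int(row["signups"]))
        print(f"{row['source']}: {rate:.1f}% of visitors signed up")

Even a rough number like this tells you whether your messaging or your targeting needs work before you spend more on ads.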

6. Sharpen your messaging

Remember what users were searching for? Let’s use that data and run a few ad campaigns pointing to the landing page. This is a super easy way to win a potential customer who is in need of your product.

Now you have proven your concept. Good!

7. Experience Counts

Reach out to industry leaders and critics to learn their thoughts, hear about their problems, and gather feedback on what you are building.

How to reach out? Cold email. Yes! Simply write them a short and crisp message.

Most people love attention and appreciation, and they’d be happy to give you feedback.

Remember, keep it crisp and to the point.

These simple steps can potentially save you $100,000 on building a team or outsourcing.

Now, be proud of yourself. You have a great proof of concept, a few potential users, and strategic messaging in place. Most of it was free. 🙌

Marc Andreessen said -

You have removed one layer of risk.

The kind of feedback and information you’ll get from a combination of these activities is going to help you build a better business.

But don’t forget that your passion and belief in your idea are central to everything you are doing. Learn from the data and move on.

Let’s build awesome.

Starting up? Do it right! Join 100+ amazing founders who built successful products with Indiez.

Know more about us here — Indiez.io


7 minute read to save $100,000 on your app idea. was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

What Industries Will the Tech Superpowers Consume Next?

Par Jedidiah Yueh

What if Apple, the world’s richest company, bought a bank? What if Alphabet (Google), the world’s most powerful data company, bought an insurance company? What if Amazon, the world’s most valuable retailer, privatized the mail service? And what if Facebook, the world’s most powerful media company, bought a television network?

The world can change in the blink of an eye.

When Amazon bought Whole Foods for $13.7 billion, the threat landscape for disruption changed forever.

Within hours of the news, the stock prices of the other major grocery chains tumbled beneath waves of market pessimism. Amazon, however, rode the market upward, its market value rising by over $15 billion.

The market seesaw effect made the acquisition arguably free for Bezos and company.

The Tech Superpowers, the five most valuable tech companies in the world — Apple, Alphabet, Amazon, Facebook, and Microsoft — are worth more than $3 trillion in market cap in total. That’s greater than the GDPs of Russia and Canada combined.

What else could the Tech Superpowers buy with their incredible cash holdings and the market seesaw weighted so heavily to their advantage?

The Narrow View of Disruption

For decades, entrepreneurs and executives have referred to Clayton Christensen’s The Innovator’s Dilemma as the book that defined disruption.

In his book, Christensen describes how industry leaders repeatedly lose to new entrants, who build simpler solutions, targeting a small, neglected segment of the market, and then add features and move upmarket over time.

In the end, the new entrant overthrows the complacent king, who has spent too much time delivering incremental features at the behest of a few large customers, making their products too complex and ripe for disruption.

If legacy companies only had to worry about bottoms-up disruption, the world would be a much more predictable place.

The Innovator’s Dilemma Disrupted

Apple recently announced iPhone X, the future of the smartphone, a $999 phone that will likely drive enough sales to crown Apple as the first trillion-dollar company in market cap.

The original iPhone, of course, was once the quintessential proof that Christensen framed disruption far too narrowly.

Instead of targeting a small, underserved demographic, Apple went over the top.

Yes, Jobs and company built a simpler user interface, but they loaded their device with sensors, functions, and an app store that unlocked a Pandora’s box of features and capabilities. Instead of targeting the underserved, they went after the high-end of the market with a significantly higher price point than competitors.

The result? The whole world made room in their wallets to buy the iPhone. And they’ll do it again, even with Apple raising prices significantly for iPhone X.

Companies such as Tesla, Nest (acquired by Alphabet), and newer entrants like Otto have followed the over-the-top strategy by focusing on superior, not simpler products.

But industry kings need to worry about more than bottoms-up or over-the-top product disruption today. They need to worry about wholesale industry disruption.

As the Tech Superpowers run out of room to grow in their mainstay markets, they will hunger for new industries to consume. And they’ve already spread out roots across countless industries.

A Generational Banking Divide

I recently met with the president of a top ten US bank. He discounted the threat of disruption by startups or even by the Tech Superpowers. “I don’t see them signing up for the regulatory requirements of becoming a bank,” he remarked.

He did worry about one thing, however: a generational shift in behavior that could dislocate the valuable client relationships banks have built with the wealthy. Millennials, who will eventually inherit the wealth of earlier generations, don’t want relationships with bankers and wealth advisors.

They want an app.

And guess which companies are conveniently placed to give it to them?

What happens if Apple or Facebook buys a bank, like E-Trade?

E-Trade would cost Apple $14.5 billion if they paid a 28% markup (Amazon’s markup to acquire Whole Foods). That’s less than a 6% dip into their massive $261 billion cash pile to establish a major beachhead in the banking industry.

Almost overnight Apple could weave full-service banking — savings, checking, and securities investments — into their phones and other iOS devices. After all, Apple, Amazon, Alphabet, and Facebook have already dipped their toes in the water with payments.

What if Facebook bought AMC Networks ($5.3 billion at a 28% markup), producers of The Walking Dead, to suck a portion of the television advertising industry down into our phones? Millennials will watch just about anything on their phones.

What if Amazon privatized the United States Postal Service? With their existing, daily delivery routes throughout metropolitan areas and their knack for automation, wouldn’t they be able to run the postal service at a consistent profit while dramatically reducing their own cost of delivery, to the benefit of taxpayers and Amazon shareholders alike?

And what if Alphabet bought Mercury Insurance ($4.2 billion at a 28% markup)? With self-driving car data from Waymo (their self-driving car subsidiary) and data coming in from Nest, Dropcam, and Google Home devices, wouldn’t they have a prohibitive data advantage in offering competitive rates and services for home and auto insurance?
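
For readers who want to sanity-check those numbers, here is a quick back-of-the-envelope sketch in Python using only the figures quoted above. The deal prices are the article’s hypotheticals, not real offers, and the cash comparison is only meaningful for the Apple scenario:

# Back-of-the-envelope check of the hypothetical deals discussed above.
MARKUP = 0.28          # the Whole Foods-style 28% acquisition premium
APPLE_CASH = 261e9     # Apple's quoted cash holdings, in dollars

# Prices quoted in the article already include the 28% markup.
deals = {
    "E-Trade (Apple)": 14.5e9,
    "AMC Networks (Facebook)": 5.3e9,
    "Mercury Insurance (Alphabet)": 4.2e9,
}

for name, price in deals.items():
    pre_markup = price / (1 + MARKUP)   # implied value before the premium
    share = 100 * price / APPLE_CASH    # deal size relative to Apple's cash, for scale
    print(f"{name}: ~${pre_markup / 1e9:.1f}B pre-markup, {share:.1f}% of a $261B cash pile")

The E-Trade figure works out to roughly 5.6% of Apple’s quoted cash holdings, which is where the “less than 6%” claim above comes from.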

In 2016, Walmart’s acquisition of Jet.com for $3 billion looked like a significant move by an industry king to check its competition. After Amazon’s acquisition of Whole Foods, however, it’s now apparent that it was too little, too late.

The innovation cycle is the wheel that turns our world, and it turns ever faster at increasing scale.

With startups disrupting from below and the potential for mega disruption by the Tech Superpowers, what will the Walmarts of the world do now?

About the Author

Jedidiah Yueh has spent two decades decoding innovation, collecting the hidden frameworks that drive many of the most successful entrepreneurs in technology today. He has personally implemented these frameworks, inventing software products that have driven more than $4 billion in sales. As founder and executive chairman of Delphix, he works with industry giants from Facebook to Walmart to drive faster internal innovation through radical improvements in data management. Previously, he was the founding CEO of Avamar, which pioneered the data deduplication market. In 2013, he was named CEO of the Year by the San Francisco Business Times. His first book will be available in October 2017—Disrupt or Die: What the World Needs to Learn from Silicon Valley to Survive the Digital Era.

If you’d like to learn more about frameworks used to drive disruptive innovation and build multi-billion dollar software products, sign up for updates on the launch of the book and get the first five chapters now.



What Industries Will the Tech Superpowers Consume Next? was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Programming without coding

Par Febin John James

Computers have penetrated almost every industry; soon, programming will become a mandatory skill in most of them. New kinds of jobs will require people to analyse and plot data, manipulate text, query databases, and control Internet of Things devices or robots.

Currently, programmers build a layer on top of these systems: a user interface for non-programmers to interact with. Since industries advance quickly, rapid changes must be made to these interfaces, because they can only do the things they were programmed to do. Making these changes is time-consuming and costly.

A better way would be for people to interact with the computer directly. Writing code is one way, but writing statements that have to be precise can be tedious even for expert programmers. For non-programmers, it is not an easy task. Hence, we need an alternative.

A group of researchers from Stanford University may have come up with a way to solve this problem.

Voxelurn allows you to program in natural language. You can type in ‘add green monster’ to add a green monster. You can see the list of supported commands here. But that is not even the best part: you can create your own definitions. Initially, you will need to know the core language to build things. After that, you can write your own definitions. As the computer learns your definitions, the whole process of programming becomes more natural.

I would have written a small tutorial here for your understanding, but unfortunately their website is having issues. You can go through the above video for a demonstration.

The researchers recruited 70 users from Amazon Mechanical Turk. They were asked to build voxel structures. The definitions created by these users were shared so that one user could build on another’s definitions.

1. User A creates a face structure and defines it as “Add a face.”
2. User B uses the command “Add a face” and builds a hat on top of it, defining “Add a face with a hat.”
3. User C can use the “Add a face with a hat” definition and perhaps add a body, hands, legs, etc.

The idea is to evolve the language with a community of users over time. As the system learned, the researchers noticed that users tended to use the naturalised language more than 85% of the time.
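
To make the idea of community-built definitions concrete, here is a toy sketch in Python. It is not Voxelurn’s actual implementation, whose learning machinery is far more sophisticated; it only illustrates how a phrase one user defines can be expanded, through other users’ definitions, down to core-language commands:

# Toy illustration of community definitions. Each new phrase is defined in
# terms of phrases the system already understands, bottoming out in a small
# "core language". This is NOT Voxelurn's real algorithm, just the idea.
CORE = {"add red cube", "add green monster", "move up"}

definitions = {}  # phrase -> list of phrases it expands into

def define(phrase, steps):
    definitions[phrase] = steps

def expand(phrase):
    # Recursively expand a phrase down to core-language commands.
    if phrase in CORE:
        return [phrase]
    if phrase in definitions:
        return [cmd for step in definitions[phrase] for cmd in expand(step)]
    raise ValueError(f"unknown phrase: {phrase}")

# User A defines a face from core commands; user B builds on A's definition.
define("add a face", ["add red cube", "add green monster"])
define("add a face with a hat", ["add a face", "move up", "add red cube"])

print(expand("add a face with a hat"))
# -> ['add red cube', 'add green monster', 'move up', 'add red cube']

The point is the same as in the example above: each definition reuses earlier ones, so the shared vocabulary grows richer as the community keeps using it.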

This research is a good step towards a future without code. You can look into the technical details here. Here’s a link to their open source repo.

Follow Hacker Noon and me (Febin John James) for more stories. I am also writing a book to raise awareness of the Blue Whale Challenge, which has claimed the lives of many teenagers in several countries. It is intended to help parents understand the threat of the dark web and to take action to ensure the safety of their children. The book, Fight The Blue Whale, is available for pre-order on Amazon. The title will be released on the 25th of this month.

Programming without coding was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

How do people not figure out that Clark Kent is Superman, even though he doesn’t wear a mask?

Par Mahmoud Hussein

Arageek

The odd thing is that Superman doesn’t wear a mask, unlike the vast majority of superheroes. So how has no one been able to uncover the truth? And how has no one realized that Clark Kent is Superman himself?

Arageek

From the day before yesterday: Miscellaneous

How To Fix The Software Industry

Par Fagner Brack


What you can do when the industry doesn’t retain better practices over time

[Image: artwork of an empty road with a couple of traffic lights, the red lights facing the center of the road.]

The software development industry doesn’t retain better practices over time.

Maybe it’s an educational problem: thousands of new developers enter the industry every year through boot camps or self-teaching, and they don’t have the fundamental knowledge to avoid common pitfalls.

Maybe it’s ageism: companies don’t hire older people under the belief that they’re outdated, which encourages the hiring of inexperienced developers.

One thing is for sure: there’s a lack of people with Tacit Knowledge for the real world.

This knowledge is one of the most helpful. Yet, it can only be acquired by experience outside academia and shared by example in the real world.

Older people tend to have more Tacit Knowledge for the real world and traditional education can’t teach it efficiently.

So what can we do?

Tacit Knowledge for the real world is one of the most helpful types of knowledge a person can share with an individual, company or project, but also one of the hardest to share.

The software industry is a human complex system. The behavior of a human complex system is defined by the individual interactions between its components; in this case, the people.

The only way to fix a human complex system is if everyone respects a Forcing Function created to control that system.

Let’s take the traffic light, for example.

The traffic light works as long as everybody respects it. If a software development discipline existed and everybody followed it consistently, it could be for the industry what the traffic light is for traffic. It would be a Forcing Function to ensure the complex system works as desired.

However, not everybody respects a traffic light and any attempt to implement a discipline as a Forcing Function will have tradeoffs.

If you do want to respect the traffic light, it’s easy: just follow the red, green and yellow signals, and maybe a few others depending on the country. If you don’t, you can die or kill somebody else, and that alone is a very strong motivator to comply.

However, if you want to respect a discipline, it’s not that easy. There’s a learning curve to grasp its fundamentals and acquire the Tacit Knowledge to apply it. Also, if you don’t respect it, you’ll have the advantage of producing faster in the beginning and being paid high figures without the need to learn much, and that alone is a strong motivator not to comply.

If you work on a critical service, such as a surgical machine, air traffic control or a police enforcement system, and don’t have the Tacit Knowledge necessary to ensure that what you produce works well in the real world, you might do the work at the expense of risking other people’s lives. That poses a long-term moral problem that, today, you can simply ignore.

If you don’t respect the traffic light, you’ll die. If you don’t respect a software development discipline, you’ll have an advantage.

The government regulates how traffic lights work and applies fines to those who don’t obey. Even though everybody tends to obey anyway due to the obvious morbid outcomes, a traffic light is easy to regulate: many years of trial and error have helped society understand how it works.

The software industry is evolving every day and is by no means simple. It will be very hard to regulate a discipline for it. Besides, the traditional culture present in society tends to poison software development. It’s unlikely that the same type of regulation will work.

People tend to respect rules where breaking them can be dangerous to oneself or to others in the short term (like crashing the car by crossing the red light). Respecting rules that can be dangerous to the big picture requires a long-term mindset not many people have.

This is what you can do today to help fix the software industry in the long term:

  • Teach kids to program, so that they learn the fundamentals earlier and don’t have to learn everything from scratch when they get to the industry.
  • Always try to experiment and innovate on top of known patterns, so that you can help improve whatever may be chosen as a discipline in the future.
  • Show the real world business benefits of your ideas, so that companies can understand the long-term cost of negligence.

Maybe someday software developers will start to influence society outside their own communities.

Until then, we can only wonder what the most effective solution to all these complex problems is.

One thing is for sure: we’re not dealing with simple rules for the integration between cars and traffic lights.

We are dealing with the people responsible for building the core engine that powers it all.

Thanks for reading. If you have some feedback, reach out to me on Twitter, Facebook or Github.


How To Fix The Software Industry was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
