Archive

Archive for November, 2015

Web Development Reading List #113: Anticipatory Design, SVG Optimization and Native DOM Selection Tricks

November 20th, 2015

What’s going on in the industry? What new techniques have emerged recently? What insights, tools, tips and tricks is the web design community talking about? Anselm Hannemann is collecting everything that popped up over the last week in his web development reading list so that you don’t miss out on anything. The result is a carefully curated list of articles and resources that are worth taking a closer look at. — Ed.

Zigzag line endings with CSS

Autumn is nearly over, winter is coming to Germany, and over the weekend the forecast predicted the first snow for the Bavarian Alps, near where I live. Time to read about Service Workers, and some more abstract topics like teaching complex algorithms and how to be more objective.

The post Web Development Reading List #113: Anticipatory Design, SVG Optimization and Native DOM Selection Tricks appeared first on Smashing Magazine.

Categories: Others Tags:

Apple Gives the Pencil A Makeover – Interview with Jonathan Ive

November 20th, 2015

In his latest interview, Jonathan Ive, Apple’s Chief Design Officer, talks about the development of the much-discussed Apple Pencil. He mentions that although the UI for Apple’s touch-screen devices was designed primarily around multi-touch and the user’s fingers, the company found that there was still a large group of people who wanted a pencil-like instrument. They missed the ability to paint and draw in ways that you simply cannot with a finger. In solving this problem, Apple developed the Apple Pencil, which in turn drove the development of new technologies in compatible devices like the iPad Pro.

Apple Pencil

Apple Pencil1

Jonathan continued by sharing that Apple’s aim is to create a more natural experience. He encourages other tech companies to design with the same mindset: to observe the tiny details of how and why users do what they do. When asked why the new tool was named a pencil and not a stylus, he explained that a stylus is more a piece of technology, whereas a pencil has a more analogue association. The technology inside the Pencil was developed to create that natural experience: it can detect pressure as well as the angle at which it is held. It was fundamentally designed to be a more natural instrument.

Apple Pencil2

“I actually think it’s very clear the Pencil is for making marks, and the finger is a fundamental point of interface for everything within the operating system. And those are two very different activities with two very different goals.”

Apple Pencil3

The Apple Pencil aims to make communicating ideas through drawing seamless, allowing designers the freedom to use sketches, abstract details, or marks as part of a conversation instead of just words.

Read the full interview on wallpaper.com

Read More at Apple Gives the Pencil A Makeover – Interview with Jonathan Ive

Categories: Designing, Others Tags:

Increase Your Productivity: How to Write Faster

November 20th, 2015

Those who write a lot have probably asked themselves whether they could somehow produce more posts in the same amount of time. This question, driven by the desire to increase one’s productivity, is not just interesting but quite important to people who have to write a lot in their job. For freelance journalists, it can even determine their income: the more they can produce in a given amount of time, the more money they earn, assuming the commissions are there. In this article, I will give you some advice drawn from daily practice that might help you spend less time per article.

A Few Words at the Beginning

Some of the following productivity tips might sound a little mundane, and you might doubt whether they can help. However, I assure you that every single one of them stems directly from practice and really can help you write faster. The most obvious things are often overlooked too quickly.

0. Write Down All Article Ideas

Great article ideas are often forgotten because you didn’t note them down immediately. Whether or not this rings true for you, you should create an idea book that you keep with you at all times. Then, when you sit down to write an article, you will instantly have a great idea ready to prepare and elaborate on.

1. Prepare Yourself Well

Before starting to write, collect all the significant sources for your planned article, and think about the structure of the post for a couple of minutes. Also consider any image material you might need; download it and store it on your desktop for later use.

2. Set a (Daily) Deadline

When you set a time limit for yourself, you will automatically be done faster because you have a temporal goal. You know when you need to be done, and thus you will hurry more than you would without a time limit. You have surely heard of Parkinson’s law, which states that any task expands to fill the time available for its completion. Narrow down the time slot and you’ll see that it works.

3. Start with the Second Paragraph

The first paragraph of a blog post is always crucial: it is the lead, and it describes the topic of the article. This is why many people spend a lot of time trying to write a good first paragraph. Instead, start with the second paragraph and write the first one at the end, while you are still in the “flow”; this lets you complete the text faster.

4. Productivity Enforcer: Write Against the Clock

This works great for me. I use a timer or countdown app while writing. Before starting, I decide on a period of time in which the article has to be finished, then start the countdown. After the set time has passed, the app emits a loud sound. I am still working on consistently getting things done before that sound reaches my ears, but I have already improved, so try it for yourself.

Timer App for Windows | Timer App for Mac OS X

5. Find Your Most Productive Hours

Not everyone can be at the same level of creativity and pace at all times. Some people, among them an impressive number of top managers, are most productive in the early morning. Get up early if writing in the morning comes easily to you. If you only get going in the evening, use those hours for writing instead. It’s entirely up to you; write through the night if you can. Just get in line with your own rhythm.

6. Write About Things You Like

Many people write faster about things they like than about things they don’t. I can see this phenomenon in myself: articles on topics that I like are done faster than texts on subjects that I’m not fond of.

7. Keep Your Texts Short

While writing, ask yourself whether each sentence could be made more comprehensible with fewer words. If possible, write short sentences. Imagining that every sentence has to fit into a 140-character tweet can be helpful as well. Keep in mind: if a story can be told in 3 paragraphs, tell it in 3 paragraphs. Doesn’t it sound great to reach a state of increased productivity by doing less?

8. Write, Don’t Edit

Just start writing and don’t worry about your spelling; just write until you’re done. It has proven more efficient and faster to correct spelling and phrasing after the article is completed. Also, once the article is done, it’s time to recheck facts and add further sources if necessary.

9. Turn Off All Disturbing Factors

Close the door, and mute your telephone and your smartphone. The email client should also stay closed. When you cannot be disturbed, it is easier to reach the “tunnel state”, a.k.a. “the flow”: a state in which the writing ideally happens by itself.

10. Try Voice Recognition Software

Most people can talk a lot faster than they can type. That’s why the idea of using voice recognition software to create articles much more quickly isn’t that far-fetched. Jon Morrow from Copyblogger uses this kind of software for every text he has to write. I have to admit that I failed at every attempt to get it working for me, however. I’ve already spent hundreds of dollars on software such as Dragon NaturallySpeaking and the like but have always returned to plain typing after a very short period.

11. Don’t Be a Perfectionist

Perfectionists have a hard time in the writing business. The more often they read a text, the more they feel the urge to change it, over and over. I recommend not doing that. Stand by what you’ve written. The perfect text doesn’t exist.

12. Bonus: A Concluding Infographic on Productivity

Eight secrets that (are supposed to) make you write faster.

Increase Your Productivity: How to Write Faster
Image Source: 8 Secrets to Writing Faster Blog Posts

Conclusion

In today’s article, we have given you twelve productivity tips and an infographic to help you write faster and more efficiently. For me, at least, these tips prove helpful on a daily basis. As a result, I can now do almost twice the work in the same amount of time. My productivity has increased big time. What advice works especially well for you?


Categories: Others Tags:

Testing with Data

November 20th, 2015

It’s not a coincidence that this is coming off the heels of Dave Paquette’s post on GenFu and Simon Timms’ post on source control for databases in the same way it was probably not a coincidence that Hollywood released three body-swapping movies in the 1987-1988 period (four if you include Big).

I was asked recently for some advice on generating data for use with integration and UI tests. I already have some ideas but asked the rest of the Western Devs for some elucidation. My tl;dr version is the same as what I mentioned in our discussion on UI testing: it’s hard. But manageable. Probably.

The solution needs to balance a few factors:

  • Each test must start from a predictable state
  • Creating that predictable state should be as fast as possible
  • Developers should be able to figure out what is going on by reading the test

The two options we discussed both assume the first factor to be immutable. That means you either clean up after yourself when the test is finished or you wipe out the database and start from scratch with each test. Cleaning up after yourself might be faster, but it has more moving parts: if the test fails partway through, cleaning up can mean different things depending on which step you failed in.

So given that we will likely re-create the database from scratch before each and every test, there are two options. My current favourite solution is a hybrid of the two.
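To make the wipe-and-rebuild premise concrete before we get to the options, here is a minimal sketch using pytest and an in-memory SQLite database. The schema and the test are hypothetical stand-ins, not anything from the projects mentioned above:

    # Fixture: every test gets a brand-new database, so every test
    # starts from a predictable state and nothing needs cleaning up.
    import sqlite3
    import pytest

    SCHEMA = """
    CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE products  (id INTEGER PRIMARY KEY, company_id INTEGER, name TEXT);
    """

    @pytest.fixture
    def db():
        conn = sqlite3.connect(":memory:")
        conn.executescript(SCHEMA)
        yield conn
        conn.close()

    def test_company_can_be_created(db):
        db.execute("INSERT INTO companies (name) VALUES (?)", ("Christmas Town",))
        count, = db.execute("SELECT COUNT(*) FROM companies").fetchone()
        assert count == 1

An in-memory database keeps the “fast as possible” requirement honest; swapping the connection for a real server is where the trade-offs below come in.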

Maintain a database of known data

In this option, you have a pre-configured database. Maybe it’s a SQL Server .bak file that you restore before each test. Maybe it’s a GenerateDatabase method that you execute. I’ve done the latter on a Google App Engine project, and it works reasonably well from an implementation perspective. We had a class for each domain aggregate and used dependency injection. So adding a new test customer to accommodate a new scenario was fairly simple. There are a number of other ways you can do it, some of which Simon touched on in his post.
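As a rough Python analogue of that “GenerateDatabase with a class per aggregate” setup (the real thing used dependency injection in another stack; every name below is hypothetical):

    # One generator class per domain aggregate; adding a new test
    # customer for a new scenario means adding one more generator.
    class CustomerGenerator:
        def generate(self, conn):
            conn.execute("INSERT INTO companies (name) VALUES (?)",
                         ("Christmas Town",))

    class ProductGenerator:
        def generate(self, conn):
            conn.executemany(
                "INSERT INTO products (company_id, name) VALUES (1, ?)",
                [("Sleigh",), ("Snow Globe",)])

    def generate_database(conn, generators=(CustomerGenerator(), ProductGenerator())):
        # The injected list of generators is the whole "known data" set.
        for generator in generators:
            generator.generate(conn)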

We also had it set up so that we could create only the customer we needed for that particular test. That way, we could use a step like Given I'm logged into 'Christmas Town' and it would set up only that data.

There are some drawbacks to this approach. You still need to create a new class for a new customer if you need to do something out of the ordinary. And if you need to do something only slightly out of the ordinary, there’s a strong tendency to use an existing customer and tweak its data ever so slightly to fit your test’s needs, other tests be damned. With these tests falling firmly in the long-running category, you don’t always find out the effects of this until much later.

Another drawback: it’s not obvious in the test exactly what data you need for that specific test. You can accommodate this somewhat with a naming convention. For example, Given I'm logged into a company from India, if you’re testing how the app works with rupees. But that’s not always practical. Which leads us to the second option.

Create an API to set up the data the way you want

Here, your API contains steps to fully configure your database exactly the way you want. For example:

Given I have a company named "Christmas Town" owned by "Jack Skellington"
And I have 5 product categories
And I have 30 products
And I have a customer
...

You can probably see the major drawback already. This can become very verbose. But on the other hand, you have the advantage of seeing exactly what data is included which is helpful when debugging. If your test data is wrong, you don’t need to go mucking about in your source code to fix it. Just update the test and you’re done.

Also note the lack of specifics in the steps. Whenever possible, I like to be very vague when setting up my test data. If you have a good framework for generating test data, this isn’t hard to do. And it helps uncover issues you may not account for using hard-coded data (as anyone named D’Arcy O’Toole can probably tell you).
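To illustrate, here is a toy version of such a generator (GenFu plays this role in .NET; the names and fields below are made up). Randomized values are exactly what flush out the bugs that hand-picked, well-behaved data never hits:

    import random

    # Deliberately awkward sample values: apostrophes, diacritics, etc.
    FIRST_NAMES = ["Jack", "Sally", "D'Arcy", "Zoë"]
    LAST_NAMES = ["Skellington", "O'Toole", "Finklestein"]

    def a_customer(**overrides):
        """Build a vague, randomized customer; pin only what the test cares about."""
        customer = {
            "name": f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}",
            "country": random.choice(["Canada", "India", "Norway"]),
        }
        customer.update(overrides)
        return customer

    # "Given I'm logged into a company from India" would boil down to:
    # customer = a_customer(country="India")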


Loading up your data with a granular API isn’t realistic, which is why I like the hybrid solution. By default, you pre-load your database with some common data, like lookup tables with lists of countries, currencies, product categories, etc. Stuff that needs to be in place for the majority of your tests.

After that, your API doesn’t need to be that granular. You can use something like Given I have a basic company, which will create the company, add an owner and maybe some products, and use that to test the process for creating an order. Under the hood, it will probably use the specific steps.

One reason I like this approach: it hides only the details you don’t care about. When you say Given I have a basic company and I change the name to "Rick's Place", that tells me, “I don’t care how the company is set up, but the company name is important”. Very useful for narrowing the focus of the test when you’re reading it.
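A sketch of how such a coarse step might be layered on top of the granular ones (hypothetical helper names again):

    def create_company(conn, name):
        cursor = conn.execute("INSERT INTO companies (name) VALUES (?)", (name,))
        return cursor.lastrowid

    def create_products(conn, company_id, count):
        for i in range(count):
            conn.execute("INSERT INTO products (company_id, name) VALUES (?, ?)",
                         (company_id, f"Product {i}"))

    def basic_company(conn, name="Some Company"):
        # Sensible defaults for everything the test doesn't care about...
        company_id = create_company(conn, name)
        create_products(conn, company_id, count=5)
        return company_id

    # ...so a test that only cares about the name overrides just that:
    # basic_company(conn, name="Rick's Place")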

This approach will understandably lead to a whole bunch of different methods for creating data of various sizes and coarseness. And for that you’ll need to…

Maintain test data

Regardless of your method, maintaining your test data will require constant vigilance. In my experience, there is a tremendous urge to take shortcuts when it comes to test data. You’ll re-use a test company that doesn’t quite fit your scenario. You’ll alter your test to fit the data rather than the other way around. You’ll duplicate a data setup step because your API isn’t discoverable.

Make no mistake, maintaining test data is work. It should be treated with the same respect and care as the rest of your code. Possibly more so since the underlying code (in whatever form it takes) technically won’t be tested. Shortcuts and bad practices should not be tolerated and let go because “it’s just test data”. Fight the urge to let things slide. Call it out as soon as you see it. Refactor mercilessly once you see opportunities to do so.

Don’t be afraid to flip over a table or two to get your point across.

– Kyle the Unmaintainable

Categories: Others, Programming Tags:

Free download: Elegant Vector Kit

November 19th, 2015

The Elegant Vector Kit is a set of beautifully rendered workspace elements, designed by Nasti Funny. Great for creating hero images, and desktop illustrations, many of the elements are also useful for design mockups.

The set includes 60 workspace elements, ranging from coffee cups to Apple watches.

The tech on offer reads like an incredible Christmas stocking for some lucky designer. Included are: 6 Apple Watches, clipped and unclipped; 6 iPad Minis, in three colors, front and back; 5 iPhones, in different colors, front and back; an iMac, with keyboard, mouse, and trackpad; a GoPro Hero4 with smart remote; 3 pairs of headphones, from Marshall, Beoplay, and Apple; a Canon 5D Mark III; 3 Moleskine sketchbooks; assorted stationery; and even a can of Coke; plus lots more!

The Beoplay headphones and the Wacom Tablet are definitely items we’d like to have on our desks.

The images are supplied in 5 different formats: AI, EPS, PSD, PNG, and SVG. And the full set is free for personal and commercial use.

Download the files from the source link below.


Source

Categories: Designing, Others Tags:

To ECC or Not To ECC

November 19th, 2015

On one of my visits to the Computer History Museum – and by the way this is an absolute must-visit place if you are ever in the San Francisco bay area – I saw an early Google server rack circa 1999 in the exhibits.

Not too fancy, right? Maybe even … a little janky? This is building a computer the Google way:

Instead of buying whatever pre-built rack-mount servers Dell, Compaq, and IBM were selling at the time, Google opted to hand-build their server infrastructure themselves. The sagging motherboards and hard drives are literally propped in place on handmade plywood platforms. The power switches are crudely mounted in front, the network cables draped along each side. The poorly routed power connectors snake their way back to generic PC power supplies in the rear.

Some people might look at these early Google servers and see an amateurish fire hazard. Not me. I see a prescient understanding of how inexpensive commodity hardware would shape today’s internet. I felt right at home when I saw this server; it’s exactly what I would have done in the same circumstances. This rack is a perfect example of the commodity x86 market D.I.Y. ethic at work: if you want it done right, and done inexpensively, you build it yourself.

This rack is now immortalized in the National Museum of American History. Urs Hölzle posted lots more juicy behind the scenes details, including the exact specifications:

  • Supermicro P6SMB motherboard
  • 256MB PC100 memory
  • Pentium II 400 CPU
  • IBM Deskstar 22GB hard drives (×2)
  • Intel 10/100 network card

When I left Stack Exchange (sorry, Stack Overflow) one of the things that excited me most was embarking on a new project using 100% open source tools. That project is, of course, Discourse.

Inspired by Google and their use of cheap, commodity x86 hardware to scale on top of the open source Linux OS, I also built our own servers. When I get stressed out, when I feel the world weighing heavy on my shoulders and I don’t know where to turn … I build servers. It’s therapeutic.

I like to give servers a little pep talk while I build them. “Who’s the best server! Who’s the fastest server!”

— Jeff Atwood (@codinghorror) November 16, 2015

Don’t judge me, man.

But more seriously, with the release of Intel’s latest Skylake architecture, it’s finally time to upgrade our 2013 era Discourse servers to the latest and greatest, something reflective of 2016 – which means building even more servers.

Discourse runs on a Ruby stack, and one thing we learned early on is that Ruby demands exceptional single-threaded performance, i.e., a CPU running as fast as possible. Throwing umptazillion CPU cores at Ruby doesn’t buy you a whole lot other than being able to handle more requests at the same time. Which is nice, but doesn’t get you speed per se. Someone made a helpful technical video to illustrate exactly how this all works.

This is by no means exclusive to Ruby; other languages like JavaScript and Python share this trait. And Discourse itself is a JavaScript application delivered through the browser, which exercises the mobile/laptop/desktop client CPU in a big way. Mobile devices reaching near-parity with desktops in single-threaded performance is something we’re betting on in a big way with Discourse.
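You can watch that constraint in action without the video. Python has the same property as MRI Ruby here (a global interpreter lock), so a quick sketch in it makes the point: CPU-bound work gains nothing from a second thread, which is why per-core clock speed is what we shop for.

    import time
    from threading import Thread

    def busy(n=10_000_000):
        # Pure CPU work; the interpreter lock keeps threads from
        # running this in parallel.
        while n:
            n -= 1

    start = time.perf_counter()
    busy(); busy()
    print(f"sequential: {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    threads = [Thread(target=busy) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"two threads: {time.perf_counter() - start:.2f}s  (no faster)")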

So, good news! Although PC performance has been incremental at best in the last 5 years, between Haswell and Skylake, Intel managed to deliver a respectable per-thread performance bump. Since we are upgrading our servers from Ivy Bridge (very similar to the i7-3770k), the generation before Haswell, I’d expect a solid 33% performance improvement at minimum.

There is a catch with the Xeons, though: the more cores Intel packs on a chip, the slower they all go. From Intel’s current Xeon E5 lineup:

  • E5-1680: 8 cores, 3.2 GHz
  • E5-1650: 6 cores, 3.5 GHz
  • E5-1630: 4 cores, 3.7 GHz

Which brings me to the following build for our core web tiers, optimized for “lots of inexpensive, fast boxes”:

The 2013 build:

  • Xeon E3-1280 V2 Ivy Bridge 3.6 GHz / 4.0 GHz quad-core ($640)
  • SuperMicro X9SCM-F-O mobo ($190)
  • 32 GB DDR3-1600 ECC ($292)
  • SC111LT-330CB 1U chassis ($200)
  • Samsung 830 512GB SSD ×2 ($1,080)
  • 1U heatsink ($25)
  • Total: $2,427; 31w idle, 87w BurnP6 load

The 2016 build:

  • i7-6700k Skylake 4.0 GHz / 4.2 GHz quad-core ($370)
  • SuperMicro X11SSZ-QF-O mobo ($230)
  • 64 GB DDR4-2133 ($520)
  • CSE-111LT-330CB 1U chassis ($215)
  • Samsung 850 Pro 1TB SSD ×2 ($886)
  • 1U heatsink ($20)
  • Total: $2,241; 14w idle, 81w BurnP6 load

So, about 10% cheaper than what we spent in 2013, with 2× the memory, 2× the storage (probably 50-100% faster too), and at least ~33% faster CPU. With lower power draw, to boot! Pretty good. Pretty, pretty, pretty, pretty good.

(Note that the memory bump is only possible thanks to Intel finally relaxing their iron fist of maximum allowed RAM at the low end; that’s new to the Skylake generation.)

One thing is conspicuously missing from our 2016 build: Xeons, and ECC RAM. In my defense, this isn’t intentional: we wanted the fastest per-thread performance, and no Intel Xeon, either currently available or announced, reaches 4.0 GHz with Skylake. Paying half the price for a CPU with better per-thread performance than any Xeon, well, I’m not going to kid you, that’s kind of a nice perk too. So what is ECC all about?

Error-correcting code memory (ECC memory) is a type of computer data storage that can detect and correct the most common kinds of internal data corruption. ECC memory is used in most computers where data corruption cannot be tolerated under any circumstances, such as for scientific or financial computing.

Typically, ECC memory maintains a memory system immune to single-bit errors: the data that is read from each word is always the same as the data that had been written to it, even if one or more bits actually stored have been flipped to the wrong state. Most non-ECC memory cannot detect errors although some non-ECC memory with parity support allows detection but not correction.
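The correction mechanism is worth seeing once. Here is an illustrative single-error-correcting Hamming code in Python; real ECC DIMMs apply the same principle at word width, storing 8 check bits alongside each 64 data bits:

    def encode(data_bits):
        """Interleave data bits with parity bits at power-of-two positions (1-based)."""
        code, i, pos = [], 0, 1
        while i < len(data_bits):
            if pos & (pos - 1) == 0:          # power of two: parity placeholder
                code.append(0)
            else:
                code.append(data_bits[i])
                i += 1
            pos += 1
        for p in range(len(code)):            # fill in each parity bit
            pos = p + 1
            if pos & (pos - 1) == 0:
                code[p] = sum(code[q] for q in range(len(code))
                              if (q + 1) & pos) % 2
        return code

    def correct(code):
        """Locate and flip a single bad bit; returns (code, position), 0 = clean."""
        syndrome = 0
        for p in range(len(code)):
            pos = p + 1
            if pos & (pos - 1) == 0:
                if sum(code[q] for q in range(len(code)) if (q + 1) & pos) % 2:
                    syndrome += pos
        if syndrome:
            code[syndrome - 1] ^= 1
        return code, syndrome

    word = encode([1, 0, 1, 1])               # Hamming(7,4): 4 data, 3 check bits
    word[2] ^= 1                              # simulate a flipped bit in storage
    fixed, where = correct(word)
    print(f"bit {where} flipped and corrected: {fixed}")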

It’s received wisdom in the sysadmin community that you always build servers with ECC RAM because, well, you build servers to be reliable, right? Why would anyone intentionally build a server that isn’t reliable? Are you crazy, man? Well, looking at that cobbled together Google 1999 server rack, which also utterly lacked any form of ECC RAM, I’m inclined to think that reliability measured by “lots of redundant boxes” is more worthwhile and easier to achieve than the platonic ideal of making every individual server bulletproof.

Being the type of guy who likes to question stuff… I began to question. Why is it that ECC is so essential anyway? If ECC was so important, so critical to the reliable function of computers, why isn’t it built in to every desktop, laptop, and smartphone in the world by now? Why is it optional? This smells awfully… enterprisey to me.

Now, before everyone stops reading and I get permanently branded as “that crazy guy who hates ECC”, I think ECC RAM is fine:

  • The cost difference between ECC and not-ECC is minimal these days.
  • The performance difference between ECC and not-ECC is minimal these days.
  • Even if ECC only protects you from rare 1% hardware error cases that you may never hit until you literally build hundreds or thousands of servers, it’s cheap insurance.

I am not anti-insurance, nor am I anti-ECC. But I do seriously question whether ECC is as operationally critical as we have been led to believe, and I think the data shows modern, non-ECC RAM is already extremely reliable.

First, let’s look at the Puget Systems reliability stats. These guys build lots of commodity x86 gamer PCs, burn them in, and ship them. They helpfully track statistics on how many parts fail either from burn-in or later in customer use. Go ahead and read through the stats.

For the last two years, CPU reliability has dramatically improved. What is interesting is that this lines up with the launch of the Intel Haswell CPUs which was when the CPU voltage regulation was moved from the motherboard to the CPU itself. At the time we theorized that this should raise CPU failure rates (since there are more components on the CPU to break) but the data shows that it has actually increased reliability instead.

Even though DDR4 is very new, reliability so far has been excellent. Where DDR3 desktop RAM had an overall failure rate in 2014 of ~0.6%, DDR4 desktop RAM had absolutely no failures.

SSD reliability has dramatically improved recently. This year Samsung and Intel SSDs only had a 0.2% overall failure rate compared to 0.8% in 2013.

Modern commodity computer parts from reputable vendors are amazingly reliable. And their trends show from 2012 onward essential PC parts have gotten more reliable, not less. (I can also vouch for the improvement in SSD reliability as we have had zero server SSD failures in 3 years across our 12 servers with 24+ drives, whereas in 2011 I was writing about the Hot/Crazy SSD Scale.) And doesn’t this make sense from a financial standpoint? How does it benefit you as a company to ship unreliable parts? That’s money right out of your pocket and the reseller’s pocket, plus time spent dealing with returns.

We had a, uh, “spirited” discussion about this internally on our private Discourse instance.

This is not a new debate by any means, but I was frustrated by the lack of data out there. In particular, I’m really questioning the difference between “soft” and “hard” memory errors:

But what is the nature of those errors? Are they soft errors – as is commonly believed – where a stray Alpha particle flips a bit? Or are they hard errors, where a bit gets stuck?

I absolutely believe that hard errors are reasonably common. RAM DIMMs can have bugs, or the chips on the DIMM can fail, or there’s a design flaw in circuitry on the DIMM that only manifests in certain corner cases or under extreme loads. I’ve seen it plenty. But a soft error where a bit of memory randomly flips?

There are two types of soft errors, chip-level soft error and system-level soft error. Chip-level soft errors occur when the radioactive atoms in the chip’s material decay and release alpha particles into the chip. Because an alpha particle contains a positive charge and kinetic energy, the particle can hit a memory cell and cause the cell to change state to a different value. The atomic reaction is so tiny that it does not damage the actual structure of the chip.

Outside of airplanes and spacecraft, I have a difficult time believing that soft errors happen with any frequency; otherwise, most of the computing devices on the planet would be crashing left and right. I deeply distrust the anecdotal voodoo behind “but one of your computer’s memory bits could flip, you’d never know, and corrupted data would be written!” It’d be one thing if we observed this regularly, but I’ve been unhealthily obsessed with computers since birth and I have never found random memory corruption to be a real, actual problem on any computers I have owned or had access to.

But who gives a damn what I think. What does the data say?

A 2007 study found that the observed soft error rate in live servers was two orders of magnitude lower than previously predicted:

Our preliminary result suggests that the memory soft error rate in two real production systems (a rack-mounted server environment and a desktop PC environment) is much lower than what the previous studies concluded. Particularly in the server environment, with high probability, the soft error rate is at least two orders of magnitude lower than those reported previously. We discuss several potential causes for this result.

A 2009 study on Google’s server farm notes that soft errors were difficult to find:

We provide strong evidence that memory errors are dominated by hard errors, rather than soft errors, which previous work suspects to be the dominant error mode.

Yet another large scale study from 2012 discovered that RAM errors were dominated by permanent failure modes typical of hard errors:

Our study has several main findings. First, we find that approximately 70% of DRAM faults are recurring (e.g., permanent) faults, while only 30% are transient faults. Second, we find that large multi-bit faults, such as faults that affect an entire row, column, or bank, constitute over 40% of all DRAM faults. Third, we find that almost 5% of DRAM failures affect board-level circuitry such as data (DQ) or strobe (DQS) wires. Finally, we find that chipkill functionality reduced the system failure rate from DRAM faults by 36x.

In the end, we decided the non-ECC RAM risk was acceptable for every tier of service except our databases. Which is kind of a bummer, since higher-end Skylake Xeons got pushed back to the extra-fancy Purley platform upgrade in 2017. Regardless, we burn in every server we build with a complete run of memtest86 and an overnight prime95/mprime run, and you should too. There’s one whirring away through endless memory tests right behind me as I write this.

I find it very, very suspicious that ECC – if it is so critical to preventing these random, memory corrupting bit flips – has not already been built into every type of RAM that we ship in the ubiquitous computing devices all around the world as a cost of doing business. But I am by no means opposed to paying a small insurance premium for server farms, either. You’ll have to look at the data and decide for yourself. Mostly I wanted to collect all this information in one place so people who are also evaluating the cost/benefit of ECC RAM for themselves can read the studies and decide what they want to do.

Please feel free to leave comments if you have other studies to cite, or significant measured data to share.

Categories: Others, Programming Tags:

T-Mobile’s Fierce Color Ownership

November 19th, 2015

“Stop using magenta, or else” is pretty much the message OXY, a smartwatch company, received in a notice of “threatened opposition” from T-Mobile’s parent company, Deutsche Telekom AG. OXY received the letter just days before the company was due to receive its official trademark.

T-Mobile Fight Magenta
At first it didn’t make sense why T-Mobile would go after OXY and demand that it not use the color magenta in its branding. OXY is, after all, in a completely different market, wearables rather than telecom. However, after reviewing how many times T-Mobile has threatened other tech companies over using magenta in their products or marketing, it all started to make sense. Back in 2008, the telecom company threatened Engadget Mobile, a tech blog, over its use of magenta, and in 2013 it sued AT&T’s Aio Wireless for using too similar a color scheme.

Nemo Seagulls – MINE!

“Since we didn’t have the financial resources to fight Deutsche Telekom AG on this matter, and because we also didn’t want to just ignore them, we basically had two options left: We could either negotiate with Telekom to find a price for using the old logo, or we could change everything,” – Raf, OXY

Unfortunately, OXY was forced to modify over 25K image files and design work to avoid any legal problems. Below is OXY’s new logo and identity colors.

Read More at T-Mobile’s Fierce Color Ownership

Categories: Designing, Others Tags:

Web Design Inspiration: El Burro – Mexican Street Food

November 19th, 2015

Today we are traveling (online) all the way to Frogner, Norway, to bring you the latest website inspiration. You wouldn’t expect to find inspiration for a brightly colored website with Mexican flair in a residential district of Oslo, but I did. El Burro.no’s delightful website has a colorful simplicity worth appreciating.

What to love:

The mix of bright colors

El Burro 1

Fun, animated icons throughout the site

El-Burro-icons

The gradual color changing background as you scroll

El Burro2

The display of great photography featuring delicious food

El Burro6El Burro7El Burro8

Check out El Burro.no and stop by for some highly rated Mexican food next time you are in Oslo, Norway. Share your latest colorful website inspirations in the comments below.

Read More at Web Design Inspiration: El Burro – Mexican Street Food

Categories: Designing, Others Tags:

Drupal 8 Released With a Powerful New Suite of Tools

November 19th, 2015

Drupal 3

The popular open source CMS has just released its latest update, Drupal 8. The release hopes to create better user experiences for anyone using the CMS for a business or personal website. Drupal 8 features a whole new suite of tools and capabilities, including native support for integrations, API-first publishing, and better performance and scalability. In addition, Drupal now includes enhanced testing with KernelTestBase, for quick API testing of how well various components are integrated.

As a thank-you for the contributions of over 3,000 people and 1,228 companies, Drupal is sharing its success with its large community under the hashtag #celer8D8.

Drupal 1
Visit the site to download the latest version or click here to demo the platform. Share your experience in the comments below.

Read More at Drupal 8 Released With a Powerful New Suite of Tools

Categories: Designing, Others Tags:

Interview: Mike McDerment, co-founder and CEO of FreshBooks

November 19th, 2015

I had the pleasure of interviewing Mike McDerment, the co-founder and CEO of FreshBooks, for Web Design Ledger.

FreshBooks is the #1 cloud-based accounting software designed exclusively for service-based small business owners, with more than 10 million users in over 120 countries. Mike has spent the last decade making accounting software accessible to small businesses and is the co-author of Breaking the Time Barrier, which helps professionals better price their services, and has seen more than 250,000 downloads since its release in 2013. A lover of the outdoors, Mike has been bitten so many times he is reportedly the first human to have developed immunity to mosquitoes.

Freshbooks2

Can you share a little about yourself and some history about how you got into design/tech work?

I started out running events and building websites for those events. Then my event caterer asked me to build him a website as well. After a while, I was effectively building websites for other people. That led me to learn to program, and I built a simple program to bill my clients with. This program eventually turned into FreshBooks, and I ended up spending 3½ years in my parents’ basement bringing it to life.

As a creative, what inspired you to create FreshBooks? Was there any specific problem you set out to solve with it?

First of all, it was super time-consuming to create invoices and bill clients. I was pretty inconsistent about it; I never knew where I stood or how much money people owed me. I had to do what I like to call “forensic accounting”: spending time doing lots of research, checking my files, and comparing them with my accounts. The worst was the thought of “I just don’t know” rattling around my brain, keeping me up at night, because I didn’t know who owed me money and when it was supposed to come in.

What were some of the biggest challenges when first launching FreshBooks? Was it a difficult transition to juggle your design agency and FreshBooks at the same time?

I think one of the biggest challenges for me was the transition from two- to three-dimensional design. Building products is very different from building an email marketing asset or a direct mail piece; there are a lot of other considerations. I knew how to help small business owners market themselves, but not how to build a product company of my own. This was more like building a platform along with all these other things; it was a series of steps and learning curves.

The good news is, what we lacked in knowledge, we made up for in passion for what we were doing and for the customers we had.

Was there ever a turning point that made you realize the significance of FreshBooks? When did you finally consider FreshBooks more than just a little project & start looking to hire a full-time team?

There were steps along the way. At the beginning, it was me and another guy. Within 2-3 months, we realized this could be something interesting. Back in 2003, we didn’t know what we were doing, and there wasn’t much online about how to build a company like this, but we believed it could be a real company. We ran a consulting company on the side while we fired up FreshBooks and built it from there.

Could you explain your thoughts on “company culture” and how it can be fostered in a work environment? Do you feel that FreshBooks has its own unique culture?

We had this belief that it could be something, and when you act on a belief, you don’t really know. I would say our company culture is very customer-service oriented; we learned so much by listening to what our customers were saying. We designed a survey for our clients, and some of the feedback we received from one client was that using FreshBooks had changed his behavior: “I save time and get paid faster because I send my invoices sooner.” I immediately thought, wow, that sounds like an endorsement that really matters!

Mike-Quote

What are some vital management & marketing concepts for a successful company that designers/developers may not understand?

Research is a good one, and a good marketing exercise. We started out by doing incredible customer service, and by virtue of doing that, we got closer to our customers; the closer you can be, the better. The danger is being too far removed from your customers and pursuing your own unrealistic view. Constant interaction with the customer is really important. There is nothing like talking to the people who love you the most to learn more about what you should do next.

As a founder who knows how to write code, do you think it’s advantageous for anyone launching their own product to learn how to build it themselves?

There is enormous value in trying to build something for yourself. You start to understand some of the strengths and limitations of what you do when you roll up your sleeves and do it yourself. I haven’t written any software for the company in over a decade, but knowing all the technical underpinnings proves invaluable for making good decisions today, even if I am not the one doing the work.

How does one learn to delegate their workload and decide which tasks should be given to others? Is this a difficult process when growing a company to add real employees?

First, you can’t do it alone; it takes a team. The second thing, and this was a revelation to me when I figured it out, is that there are people who love to do the things you hate. It is important to understand your strengths and what you can do easily. There are things I am not efficient at, which others actually enjoy doing and are, therefore, better at than I am. Third, there was a challenge in getting out of certain areas of the business; having the patience to teach others and build their capabilities up to your standard is important. Finally, you can’t absolve yourself of responsibility. It is important to support the product, but at the same time not to meddle while helping people steer toward the promised land.

What kept you motivated in the early days of FreshBooks, and what keeps you actively engaged in the company over a decade later?

Our customers and their feedback were huge. I love the day we launch something new! That’s the creative in me; I want to solve the world’s problems. Those things are still just as exciting.

What was the general process of getting the FreshBooks mobile app built & launched in the App Store? Did you take away any major lessons from that project?

We started our first mobile app way too early; some of the tooling and best practices had not been created yet, and the process was very expensive. We then got it right and built an award-winning application. I don’t know if it has anything to do with the App Store, but the hardest problem is being clear on what constraints you are setting for yourself, meeting those timelines, and staying focused. That’s hard, but if you can stay focused and deliver, you can take yourself wherever you’d like to go.

If you could travel back in time and give your younger self one piece of advice, what would it be?

There are so many new tools now that can enable you in ways that were not possible before. Mobile development is expensive and challenging, but I think we got a lot of stuff right at the beginning. I think anointing a “design dictator” who can make the last call on a design is very important, because otherwise you can have logjams. This person has to be able to listen to others and not fall in love with their own designs. I believe design by consensus is a recipe for mediocrity. Simply having someone appointed with a design title is not the way to go about it; the kinds of instincts you need to be a great design dictator don’t come with a job title.

Download Mike’s Book for Creatives:

Learn how to charge what you’re really worth
Read this book and find out how you can earn twice as much as you do today.

Read More at Interview: Mike McDerment, co-founder and CEO of FreshBooks

Categories: Designing, Others Tags: